Tuesday, December 9, 2025

Something to Know - 9 December

Can you imagine that the light in this image (from 200,000 light-years away) left its source long before recorded human history?   Think about it.   A lot happened before the Hubble telescope arrived on the scene.   Earth was once thought to be flat; we now have images from our Moon confirming it is closer to an oblate spheroid.   The wonders of science will continue to be discovered by humanity, that is, if we can keep from killing ourselves.


Sculpted by Stellar Winds

NGC 346 is a young star cluster about 200,000 light-years away. This image combines Hubble observations made at infrared, optical, and ultraviolet wavelengths, showing the hot blue stars carving out a space inside the surrounding nebula of gas and dust. (ESA / Hubble & NASA; A. Nota, P. Massey, E. Sabbi, C. Murray, M. Zamani)

I am stuck right now between two thoughts.   One is the image above, which makes me feel so humble and small on a planet that is a speck in its solar system, in a galaxy that is probably one of hundreds of billions, maybe an infinite number for all we know.   And then we have our own issues here on Earth, the same problems civilizations have had for as long as we have recorded evidence.   The problems we have today have happened many times before; the disputes were settled, and then the same damn problems came up again, just like before, and we have never learned from our mistakes.   And we keep on following the same game plan, century after century.   So, when we look up again at light that actually started toward us 200,000 years ago, it really is hard to accept that mankind, for whatever it has done or will do, can have any effect on where that light shines a million years from now.  For all we know, the source of the light we are seeing in the image above may have burned out 100,000 years ago.   All I know is that Trump is a fading light, and we can turn him off.

Geddry's Newsletter a Publication of nGenium marygeddry@substack.com 



The Kennedys Would Like a Word: Trump's Gala of Decline

As babies die of whooping cough and China posts a trillion-dollar surplus, Trump lectures the nation on gold paint and tariff fantasies.

Dec 8
 
 

Good morning! Pour yourself something strong, because the country is waking up today with a migraine that didn't come from the weather. Donald Trump spent last night hosting his own discount-bin Kennedy Center Awards, complete with self-awarded prestige, gold-leaf delusions, and an audience stacked with MAGA superfans who clap the way North Korean functionaries clap when the cameras are running. In the background, the actual nation he was supposed to govern kept falling apart, resembling a collapsing carnival ride while Trump shouted that everything was tremendous.

Let's begin with the economy, which Trump insists is "roaring" even as the job market is face-planting directly into the pavement. The numbers out this morning confirm what everyone in the real world already knows: layoffs are soaring at a rate we haven't seen since the Great Recession, possibly even eclipsing COVID once the revisions roll in. More than 1.1 million jobs are gone, gone as in disappeared, vanished, evaporated, and the trend-line points toward 2 million by year's end. Prices are climbing, wages are not, families are drowning, and the federal government under Trump is playing whack-a-mole with basic services. They've stripped affordable housing language out of the defense bill, slashed health benefits for service members, and are still out there bragging about rising Treasury yields like higher borrowing costs are some kind of medal of honor. You'd think a government this incompetent would at least not be proud of it.

As America bleeds, Trump, in between forgetting which year it is and explaining to the nation that "you can't fake real gold", is reportedly gearing up to announce a $12 billion bailout for farmers. He'll present this as heroic, an act of salvation, a benevolent ruler handing grain to the peasants from the palace balcony, but the truth is far simpler: the bailout is only necessary because Donald Trump's own tariff war kneecapped American agriculture. China didn't destroy U.S. farm exports. Trump did. China simply reacted like any rational nation under economic attack and bought from someone else. Farmers didn't need saving until Trump set the barn on fire. Now he's insisting they thank him for the hose.

Speaking of China, the world's largest exporter just quietly shattered a historic milestone: a $1.08 trillion trade surplus, the biggest ever recorded, and we haven't even hit the end of the year. And let's be perfectly clear: China didn't hit this milestone despite Trump's tariffs, but because of them. Trump's much-hyped trade war didn't weaken Beijing; it rerouted global commerce around the United States like a bypass around a toxic spill. Exports to the U.S. plummeted 29% in November, and yet China is exporting more than ever to Europe, to Southeast Asia, to every market Trump wasn't brave enough or coherent enough to pick a fight with. Europe is now choking on subsidized Chinese EVs and solar panels, and Macron is openly threatening U.S.-style tariffs, not because he admires Trump's genius, but because he's running out of choices in the economic landscape Trump helped warp.

Trump swore he'd make China "bend the knee." Instead, he delivered the U.S. economy to Beijing like a tribute payment, which brings me to something I'm writing about in more depth this week: the way Trump has steadily transformed the United States from a sovereign power into a vassal state. China's trillion-dollar surplus is not just a financial statistic; it's a flashing indicator on the geopolitical dashboard showing exactly how far American leverage has fallen under Trump's sabotaged trade policies. A nation that once shaped global markets is now being shaped by them, shoved to the margins while Trump claims victory from inside the wreckage. I'll have much more to say about this in the upcoming essay, but for now, understand this: the vassalization of America isn't theoretical anymore. It's measurable, in dollars, in lost markets, in lost credibility, and in the widening gap between the world's strategic center of gravity and the place where Trump insists everything is still "tremendous."

If the economic erosion weren't enough to raise your blood pressure, the public health news will do the job. Kentucky just lost a third infant to whooping cough, a disease so old it predates electricity, now back from the grave because the Trump–RFK Jr. Axis of Anti-Science gutted the CDC's advisory committee and stacked it with activists who treat childhood immunization the way Trump treats the Constitution: optional, inconvenient, and in the way. Newborn hepatitis B vaccines, which cut childhood infections by 99 percent, are no longer universally recommended. Surveillance systems for RSV, flu, and measles are being quietly dismantled. Public health experts are telling Americans, with straight faces and shaking voices, not to trust the CDC's own advisory panel.

And just like that, hospitals in New Jersey and New York are overflowing, flu cases are exploding, and infants across the country are dying from diseases that modern medicine solved decades ago. This is autocracy. As Dr. Haydée Brown, a physician and public-health expert who co-hosts the "ELLA" series exposing the real-time dismantling of America's health systems, explained, autocracy doesn't announce itself by telling you what to think; it begins by telling you what you're allowed to know. If you stop counting outbreaks, the outbreaks no longer make the evening news. If you stop measuring illness, illness ceases to be evidence. No data, no crisis. And without crisis, there's no accountability, which is the entire point.

You see it everywhere now: from the public health collapse, to the cooked economic narratives, to the "I ended eight wars with tariffs" fantasy, to the Supreme Court case where Trump argued he should be immune from prosecution for crimes committed in office. It's all the same operating system. Trump weakens the nation's institutions, then claims the weakness is proof they were corrupt. He sabotages the economy, then blames China. He destroys public health, then blames the experts. He undermines democracy, then declares it broken. This is how a country becomes a vassal state, not to another nation, but to the whims of one man.

While this democratic disassembly line hums along, Trump is on TV answering questions like "What makes a song great?" and explaining his deep thoughts on gold paint. His rambling Kennedy Center cosplay last night was pure decline, a fragile man squinting into the lights, insisting his memory is excellent, insisting the ratings will be huge, insisting he's running a government even as the government he runs dissolves around him like wet tissue.

Wars are escalating in Congo, in Thailand and Cambodia, in Ukraine, in Gaza. Trump claims he's the only one who can bring peace, even as leaked audio shows his "peace plan" was written in Moscow and hand-delivered to Zelensky like a ransom note. He boasts that "Russia is fine with it," which is precisely why the rest of the world should be horrified. And his official national security doctrine literally describes dismantling the EU and treating Canada as a subordinate appendage. Trump doesn't just want America to behave like a vassal state, he wants America to create them.

This is the backdrop against which he rants about fake gold and drags Kiss to a state function as if he's directing a roadshow version of his own ego. This is the administration that claims Europe isn't an ally, that praises Putin's "strength," that rejects refugees unless they fit Trump's racist obsession with "white South Africans," that skips the G20 in South Africa because Trump invented a genocide that doesn't exist. It would all be laughable if it weren't getting people killed.

This is the through-line this morning: a president who sees institutions not as tools of governance but as ornaments for his own self-mythology. A man whose policies burn down the countryside while he insists the glow on the horizon is just "beautiful energy." A ruling party that believes hiding evidence is the same as solving problems. A government that has replaced science with superstition, diplomacy with blackmail, and economic strategy with protection-racket rhetoric. A nation sliding, not tumbling, sliding toward something smaller and meaner than it used to be.

But the one thing they still fear is evidence. Evidence of corruption, evidence of failure, evidence of babies dying from diseases they let loose, evidence of Europe recoiling from America's chaos, evidence of China roaring past us while Trump insists everything is "incredible." Evidence is the antidote to autocracy. Which is why they are trying so hard to shut it down in the courts, in the data systems, in the media space, and on platforms like this one.

Truth is golden, so we stay loud, factual, and above all, annoying.






--
****
Juan Matute
CCRC


Monday, December 8, 2025

Andy Borowitz

The Borowitz Report borowitzreport@substack.com 

WASHINGTON (The Borowitz Report)—Calling them "an imminent threat to the United States of America," on Monday Pete Hegseth ordered a strike on seven boat passengers he accused of being "notorious drug traffickers."

"I was watching late-night television and a program about these individuals came on," the Secretary of War said. "I immediately hopped on Signal and ordered the attack."

Hegseth said that, although the seven alleged narcoterrorists appear to be victims of a shipwreck, "That's never stopped us before."

"To the untrained eye, these persons might look like seven stranded castaways," Hegseth continued. "In point of fact, they are all on a list of dangerous drug traffickers—a list I made myself with ChatGPT."

"The passengers include a skipper, his first mate, and a so-called 'professor' who we believe runs a fentanyl lab," he said. "Also onboard is the kingpin of a major Venezuelan cartel, who goes only by the name 'El Millonario.'"

He urged the American people "not to be fooled" by the presence of three women on the vessel, including one wearing a full-length sequined gown and heavy makeup, noting, "I wear heavy makeup and there's no one more badass."


--
****
Juan Matute
CCRC


Sunday, December 7, 2025

Something to Know - 7 December

A few days ago, we had a report on the ultimate problems that could be created by Super Artificial Intelligence (SAI).   This article speaks to the issues of plain old AI and current tools like chatbots.   Start with this definition:

What are Chatbots?  Chatbots present a conversational interface to an application that attempts to mimic human interaction. Early chatbots used basic pattern matching to understand and respond to textual input. That approach has given way to machine learning and, more recently, to deep learning and large language models, which are the basis of popular AI chatbots like OpenAI's ChatGPT and Google's Gemini.
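To make the "basic pattern matching" of early chatbots concrete, here is a minimal ELIZA-style sketch. The rules and canned responses are invented for illustration; real systems used far larger rule sets.

```python
import re

# Each rule pairs a regex with a response template; the captured text
# is substituted back into the reply, mimicking early chatbots.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.+)", re.I), "Is that the real reason?"),
]

def respond(message: str) -> str:
    """Return the first matching rule's response, or a generic fallback."""
    for pattern, template in RULES:
        m = pattern.search(message)
        if m:
            return template.format(*m.groups())
    return "Tell me more."

print(respond("I feel lost"))         # -> Why do you feel lost?
print(respond("The weather is bad"))  # -> Tell me more.
```

The contrast with a large language model is the point: nothing here "understands" anything, yet even this trivial mirror-back trick famously drew users into extended conversations.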

Super Artificial Intelligence may someday take on a "mind of its own," so to speak, and go past its guardrails.  That could result in malware that endangers human life on a very large scale.   Today's chatbots mainly affect the application users themselves, some of whom get so deeply involved in the chats that their mental health suffers.   Check out this article from The Atlantic.  As promised, there is a Jimmy Kimmel video.





The Chatbot-Delusion Crisis

Researchers are scrambling to figure out why generative AI appears to lead some people to a state of "psychosis."

Illustration by Matteo Giuseppe Pani / The Atlantic

Chatbots are marketed as great companions, able to answer any question at any time. They're not just tools, but confidants; they do your homework, write love notes, and, as one recent lawsuit against OpenAI details, might readily answer 1,460 messages from the same manic user in a 48-hour period.

Jacob Irwin, a 30-year-old cybersecurity professional who says he has no previous history of psychiatric incidents, is suing the tech company, alleging that ChatGPT sparked a "delusional disorder" that led to his extended hospitalization. Irwin had allegedly used ChatGPT for years at work before his relationship with the technology suddenly changed this spring. The product started to praise even his most outlandish ideas, and Irwin divulged more and more of his feelings to it, eventually calling the bot his "AI brother." Around this time, these conversations led him to become convinced that he had discovered a theory about faster-than-light travel, and he began communicating with ChatGPT so intensely that for two days, when averaged out, he sent a new message every other minute. 

OpenAI has been sued several times over the past month, each case claiming that the company's flagship product is faulty and dangerous—that it is designed to hold long conversations and reinforce users' beliefs, no matter how misguided. The delusions linked to extended conversations with chatbots are now commonly referred to as "AI psychosis." Several suits allege that ChatGPT contributed to a user committing suicide or advised them on how to do so. A spokesperson for OpenAI, which has a corporate partnership with The Atlantic, pointed me to a recent blog post in which the firm says it has worked with more than 100 mental-health experts to make ChatGPT "better recognize and support people in moments of distress." The spokesperson did not comment on the new lawsuits, but OpenAI has said that it is "reviewing" them to "carefully understand the details."

Whether or not the company is found liable, there is no debate that large numbers of people are having long, vulnerable conversations with generative-AI models—and that these bots, in many cases, repeat back and amplify users' darkest confidences. In that same blog post, OpenAI estimates that 0.07 percent of users in a given week indicate signs of psychosis or mania, and 0.15 percent may have contemplated suicide, which would amount to 560,000 and 1.2 million people, respectively, if the firm's self-reported figure of 800 million weekly active users is true. Then again, more than five times that proportion of adults in the United States—0.8 percent of them—contemplated suicide last year, according to the National Institute of Mental Health.
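The arithmetic behind those headcounts is easy to verify from the figures the article reports:

```python
weekly_users = 800_000_000  # OpenAI's self-reported weekly active users

# OpenAI's estimated weekly rates, per the blog post cited above
psychosis_or_mania = 0.0007 * weekly_users  # 0.07 percent
suicidal_ideation = 0.0015 * weekly_users   # 0.15 percent

print(f"{psychosis_or_mania:,.0f}")  # -> 560,000
print(f"{suicidal_ideation:,.0f}")   # -> 1,200,000
```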

Guarding against an epidemic of AI psychosis requires answering some very thorny questions: Are chatbots leading otherwise healthy people to think delusionally, exacerbating existing mental-health problems, or having little direct effect on users' psychological distress at all? And in any of these cases, why and how?

To start, a baseline corrective: Karthik Sarma, a psychiatrist at UC San Francisco, told me that he does not like the term AI psychosis, because there simply isn't enough evidence to support the argument for causation. Something like AI-associated psychosis might be more accurate.

In a general sense, three things could be happening during incidents of AI-associated psychosis, psychiatrists told me. First, perhaps generative-AI models are inherently dangerous, and they are triggering mania and delusions in otherwise-healthy people. Second, maybe people who are experiencing AI-related delusions would have become ill anyway. A condition such as schizophrenia, for instance, occurs in a portion of the population, some of whom may project their delusions onto a chatbot, just as others have previously done with television. Chatbot use may then be a symptom, Sarma said, akin to how one of his patients with bipolar disorder showers more frequently when entering a manic episode—the showers warn of but do not cause mania. The third possibility is that extended conversations with chatbots are exacerbating the illness in those who are already experiencing or are on the brink of a mental-health disorder.

At the very least, Adrian Preda, a psychiatrist at UC Irvine who specializes in psychosis, told me that "the interactions with chatbots seem to be making everything worse" for his patients who are already at risk. Psychiatrists, AI researchers, and journalists frequently receive emails from people who believe that their chatbot is sentient, and from family members who are concerned about a loved one saying as much; my colleagues and I have received such messages ourselves. Preda said he believes that standard clinical evaluations should inquire into a patient's chatbot usage, similar to asking about their alcohol consumption.

Even then, it's not as simple as preventing certain people from using chatbots, in the way that an alcoholic might take steps to avoid liquor or a video-game addict might get rid of their console. AI products "are not clinicians, but some people do find therapeutic benefit" in talking with them, John Torous, the director of the digital-psychiatry division at Beth Israel Deaconess Medical Center, told me. At the same time, he said it's "very hard to say what those therapeutic benefits are." In theory, a therapy bot could offer users an outlet for reflection and provide some useful advice.

Researchers are largely in the dark when it comes to exploring the interplay of chatbots and mental health—the possible benefits and pitfalls—because they do not have access to high-quality data. Major AI firms do not readily offer outsiders direct visibility into how their users interact with their chatbots: Obtaining chat logs would raise a tangle of privacy concerns. And even with such data, the view would remain two-dimensional. Only a clinical examination can fully capture a person's mental-health history and social context. For instance, extended AI dialogues could induce psychotic episodes by causing sleep loss or social isolation, independent of the type of conversation a user is having, Preda told me. Obsessively talking with a bot about fantasy football could lead to delusions, just the same as could talking with a bot about impossible schematics for a time machine. All told, the AI boom might be one of the largest, highest-stakes, and most poorly designed social experiments ever.

In an attempt to unwind some of these problems, researchers at MIT recently put out a study, which is not yet peer-reviewed, that attempts to systematically map how AI-induced mental-health breakdowns might unfold in people. They did not have privileged access to data from OpenAI or any other tech companies. So they ran an experiment. "What we can do is to simulate some of these cases," Pat Pataranutaporn, who studies human-AI interactions at MIT and is a co-author of the study, told me. The researchers used a large language model for a bit of roleplay.

In essence, they had chatbots pretend to be people, simulating how users with, say, depression or suicidal ideation might communicate with an AI model based on real-world cases: chatbots talking with chatbots. Pataranutaporn is aware that this sounds absurd, but he framed the research as a sort of first step, absent better data and high-quality human studies.

Based on 18 publicly reported cases of a person's conversations with a chatbot worsening their symptoms of psychosis, depression, anorexia, or three other conditions, Pataranutaporn and his team simulated more than 2,000 scenarios. A co-author with a background in psychology, Constanze Albrecht, manually reviewed a random sample of the resulting conversations for plausibility. Then all of the simulated conversations were analyzed by still another specialized AI model to "generate a taxonomy of harm that can be caused by LLMs," Chayapatr Archiwaranguprok, an AI researcher at MIT and a co-author of the study, told me—in other words, a sort of map of the types of scenarios and conversations in which chatbots are more likely to improve or worsen a user's mental health.

The results are troubling. The best-performing model, GPT-5, worsened suicidal ideation in 7.5 percent of the simulated conversations and worsened psychosis 11.9 percent of the time; for comparison, an open-source model that is used for role-playing exacerbated suicidal ideation nearly 60 percent of the time. (OpenAI did not answer a question about the MIT study's findings.)

There are plenty of reasons to be cautious about the research. The MIT team didn't have access to full chat transcripts, let alone clinical evaluations, for many of its real-world examples, and the ability of an LLM—the very thing that may be inducing psychosis—to evaluate simulated chat transcripts is unknown. But overall, "the findings are sensible," Preda, who was not involved with the research, said.

A small but growing number of studies have attempted to simulate human-AI conversations, with either human- or chatbot-written scenarios. Nick Haber, a computer scientist and education researcher at Stanford who also was not involved in the study, told me that such research could "give us some tool to try to anticipate" the mental-health risks from AI products before they're released. This MIT paper in particular, Haber noted, is valuable because it simulates long conversations instead of single responses. And such extended interactions appear to be precisely the situations in which a chatbot's guardrails fall apart and human users are at greatest risk.

There will never be a study or an expert that can conclusively answer every question about AI-associated psychosis. Each human mind is unique. As far as the MIT research is concerned, no bot does or should be expected to resemble the human brain, let alone the mind that the organ gives rise to.

Some recent studies have shown that LLMs fail to simulate the breadth of human responses in various experiments. Perhaps more troubling, chatbots appear to harbor biases against various mental-health conditions—expressing negative attitudes toward people with schizophrenia or alcoholism, for instance—making still more dubious the goal of simulating a conversation with a 15-year-old struggling with his parents' divorce or that of a septuagenarian widow who has become attached to her AI companion, to name two examples from the MIT paper. Torous, the psychiatrist at BIDMC, was skeptical of the simulations and likened the MIT experiments to "hypothesis generating research" that will require future, ideally clinical, investigations. To have chatbots simulate humans' talking with other chatbots "is a little bit like a hall of mirrors," Preda said.

Indeed, the AI boom has turned reality into a sort of fun house. The global economy, education, electrical grids, political discourse, the social web, and more are being changed, perhaps irreversibly, by chatbots that in a less aggressive paradigm might just be emerging from beta testing. Right now, the AI industry is learning about its products' risk from "contact with reality," as OpenAI CEO Sam Altman has repeatedly put it. But no professional, ethics-abiding researcher would intentionally put humans at risk in a study.

What comes next? The MIT team told me that they will start collecting more real-world examples and collaborating with more experts to improve and expand their simulations. And several psychiatrists I spoke with are beginning to imagine research that involves humans. For example, Sarma, of UC San Francisco, is discussing with colleagues whether a universal screening for chatbot dependency should be implemented at their clinic—which could then yield insights into, for instance, whether people with psychotic or bipolar disorder use chatbots more than others, or whether there's a link between instances of hospitalization and people's chatbot usage. Preda, who studies psychosis, laid out a path from simulation to human clinical trials. Psychiatrists would not intentionally subject anybody to a tool that increases their risk for developing psychosis, but rather use simulated human-AI interactions to test design changes that might improve people's psychological well-being, then go about testing those like they would a drug.

Doing all of this carefully and systematically would take time, which is perhaps the greatest obstacle: AI companies have tremendous economic incentive to develop and deploy new models as rapidly as possible; they will not wait for a peer-reviewed, randomized controlled trial before releasing every new product. Until more human data trickle in, a hall of mirrors beats a void.


****
Juan Matute
CCRC


Saturday, December 6, 2025

Something to Know - 6 December

Donald Trump understands what "Affordability" is all about and wants you to know as well.   As usual, The Atlantic is on the scene.   Also, this space will be hosting a link to the Jimmy Kimmel Live stand-up monologue.   Our nation has benefited from a long history of stand-up commentators whose wit and sarcasm entertain us and occasionally educate us.   Famous presenters such as Will Rogers, Lenny Bruce, Mort Sahl, Tom Lehrer, George Carlin, Stephen Colbert, Jon Stewart, Samantha Bee, Andy Borowitz, and John Oliver have been there to present a rebuttal to audiences receptive to commentary that thrives on humor.   Jimmy Kimmel is the one who will help us make it through the current phase of dysfunctionality.   I hope you enjoy him.  


(Patrick Smith / Getty)


President Donald Trump has promised not only that America will be "great again" but also that it will be "healthy again," "wealthy again," "beautiful again," and—crucially—"affordable again." Now, as the country faces persistent inflation, a housing crisis, and rising prices on consumer goods, he claims that affordability is nothing more than a "con job," an opportunistic buzzword leveraged by a rival party. "The word affordability is a Democrat scam," he said during a Cabinet meeting on Tuesday.

Incoming presidents don't get to pick the economy they inherit, but they can only credibly blame their predecessors for so long. In a Fox News poll last month, almost twice as many respondents said that Trump, not Joe Biden, is responsible for current economic conditions. Per new polling from Politico, 46 percent of Americans say the cost of living in the United States is the worst they can remember it being, and 46 percent think Trump is to blame for those high costs. The trend isn't entirely new; voters have blamed Trump for the economy throughout the year. As frustration persists, the president is pointing fingers at the Democrats, but he can't dispute the data.

Americans now face both a weakening dollar and stagnant income levels. Trump's surprise implementation of punitive tariffs this summer ended up making all sorts of goods, including clothing and beef, more expensive. Meanwhile, millions have left the country (voluntarily or not) amid the administration's crackdown on immigration, according to the Department of Homeland Security's estimates. This exodus, combined with a reduction in newcomers, has the potential to harm local economies.

Trump has tried conflicting strategies to deal with voter frustration. He has a tendency to invoke the previous administration when things go wrong—at the start of his term, he said Biden's name an average of six times a day, often to fault him for the economy or immigration issues. But during a recent meeting with New York City Mayor-Elect Zohran Mamdani, the president appeared to check his impulse to vilify Dems, beaming over Mamdani's proposals to fix the cost-of-living crisis. "Some of his ideas really are the same ideas I have," Trump said: "The new word is affordability."

About a week later, he dubbed himself the "AFFORDABILITY PRESIDENT" on Truth Social. But again, that only lasted so long: Affordability actually "doesn't mean anything to anybody," he said on Tuesday. Next week, he'll pivot once more as he sets off on a national tour to assuage voters' concerns about the economy and inflation.

Sentiments about a president's approach to the economy usually carry over to the incumbent party—and at the moment, Trump's relative unpopularity is Democrats' gain. The party has jumped at the chance to pummel Trump on affordability, which proved to be a winning issue in recent elections: The cost-of-living rhetoric that catapulted Mamdani to victory in New York City also helped two other Democrats win important races last month. The political scientist Lynn Vavreck told me yesterday that when Trump downplays the issue, he risks repeating some of what led to George H. W. Bush's downfall in 1992: Bush lost that election to Bill Clinton in large part because his optimism about the economy failed to connect with voters' reality. Biden suffered from a similar disconnect—and the same problem is creeping up on Trump ahead of the midterms.

Approval ratings for a president's first year in a new term often benefit from what the economic historian Robert J. Gordon calls the "honeymoon effect"—a bump that isn't neatly explained by anything other than voters' inclination to give leaders time to warm up. But by the time midterm season rolls around, voters tend to be less forgiving. Ten months into Trump's presidency, the polling is starting to track a similar pattern: His approval ratings started at 47 percent and have since slipped to 36 percent (thanks to more than just affordability). Trump has been known to bounce back. But if the honeymoon is ending, that's one thing he can't blame Biden for.



--
****
Juan Matute
CCRC