UX Roundup: Auto-Translating Social Media | Bollywood AI | Context Reboarding | Agents Almost Equal Search | Headless Software | Not a Loser
- Jakob Nielsen

Summary: Translating social media posts by default makes online communities international | Indian film studios embrace AI | AI summaries rescue users from context decay | AI agents now account for almost as much website traffic as human search | Software customers demand agent-first design | “You Are Not Talking to Somebody Who Woke Up a Loser”

UX Roundup for April 20, 2026 (Nano Banana 2)
AI Facilitates International Understanding and Community
The social media platform X (previously known as Twitter) launched a new feature a few weeks ago: automatic translation of posts from the author’s language to the user’s language. For example, since I am logged in from the United States, I see all posts in English, even when they were originally written in another language, such as Japanese or Arabic. (You can set preference options to retain the original wording for posts in additional languages you read, such as Danish, in my case.)
The translations are done in the background by Grok (another xAI product) and are excellent. The English versions of posts are always fluent and eloquent, and I have not been able to distinguish them from those of native-language speakers. If anything, posts by Americans and Brits contain more typos and awkward phrasings than Grok’s translation of foreign-language posts.
The translations are highly accurate, at least when I have compared a translated post with the original in a language I read. (X offers a one-click option on all translated posts to reveal the original text.)

A possible next step may be to not just translate the original text but improve it. However, I don’t recommend that. (Nano Banana Pro)
Translations are not just from other languages into English. The translations work between any pair of languages, so that, for example, a user in Korea could post in Korean and a user in Brazil would see that post in Portuguese.
It has long been possible to paste any text into an AI-powered translator, whether via the standard chatbot UI or a specialized translation service. What’s different with the new X translations is that they have already been automatically applied to all posts before you see them, meaning any user’s social feed is truly international by default. That little extra step required in the past was enough to prevent most users from translating most posts, and in fact, most social media platforms would hide foreign-language posts by default.
Auto-translations place users in all countries on an equal footing, whereas in the past, social media provided a substantial advantage to users in English-speaking countries. First, writing in your native language is always easier, even for people with foreign-language skills. Being able to do so encourages more people from more countries to post. Second, people almost always express themselves better in their native language than in a foreign language. This means that in the past, posts from foreigners felt less polished than posts from native-language speakers. This again made readers feel, subconsciously, that foreigners were stupid because they didn’t write as well.
Of course, translation doesn’t make a stupid person smart, but it does make all users, from all countries, appear as smart as they actually are, without the degraded communication abilities that come from trying to express oneself in a foreign language.
Will auto-translation on X bring world peace? No. But it does increase international understanding and makes the social media platform more interesting. Score one for AI making a positive contribution to society.

Translation by default = more international communication, for a truly worldwide community on social media. (Nano Banana Pro)
Bollywood Increasingly Using AI For Its Films
Continuing the topic from the previous news item of how AI translation broadens information access: India has the world’s largest film industry, and Bollywood studios are now using AI to translate, dub, and lip-sync movies into multiple languages.

Bollywood is going all in with AI. (Nano Banana Pro)
India has around 22 languages with more than 5 million native speakers each (a number I picked because it’s the number of native speakers of my native language, Danish). Yes, Hindi is by far the biggest, with 528 M native speakers, but whether from a business perspective or a cultural perspective, you don’t want your films to miss out on ticket sales to people who use Bengali (97 M speakers), Marathi (83 M), Telugu (81 M), and so forth.

Film the actress once, and she will be a native speaker of 22 different languages in the final movie. (Nano Banana Pro)
Reuters recently had an interesting article about the use of AI in the Indian film industry. Converting films from monolingual to multilingual virtually for free stood out to me. However, AI is used much more widely in India than in the American film industry, which is chained down by ridiculous union rules. The article quotes industry insiders who estimate that current AI can slash production costs to 20% of traditional budgets and cut timelines by a quarter. (And we absolutely expect the next generation of video AI tools to do better.)
AI is particularly popular for mythological and religious films, which makes sense, given the otherworldly appearance of many important Hindu gods: such visuals are far easier to generate with AI than to stage with practical effects.

Films based on mythological stories are extremely popular in India, and are immensely cheaper to make with AI. (Nano Banana Pro)
Naturally, early AI adoption suffers from teething problems. JioStar recently released an AI-generated adaptation of the epic Mahabharat. Its reception? A dismal 1.4 out of 10 on IMDb, with users complaining about lip-sync issues and unnatural styling.

Reach for the future. Don’t get left behind with yesterday’s tools. This applies to anybody, even if you’re not making movies. (Nano Banana Pro)
Right now, Indian AI films may still not be as good as the (much more expensive) films made the old-fashioned way, but we should remember that what’s now considered old-school filmmaking was once innovative and suffered its own quality problems. What’s more interesting is to consider that using our new capabilities for full-scale creative projects is the way to grow a community of skilled creators who understand the new media form. These people will be better positioned to create truly great work in a few years, as AI improves, than latecomers, who won’t start gaining creative experience until it’s too late to catch up.

You become good at using AI by using AI. You become creative at using AI by creating with AI. Countries with fewer inhibitions against innovation will dominate the future. (Nano Banana Pro)
Reuters quotes Dominic Lees (a British film school professor) as saying, “If they can deliver, then the shift in AI filmmaking will be to India.” I think he may just be right.

The future belongs to those who show up. In creative filmmaking, this may be India. (Nano Banana Pro)

Whatever else happens, audiences will get more, better, and cheaper entertainment as the use of AI in creative industries expands. (Nano Banana Pro)
Context Reboarding: How AI Summaries Rescue Us from Context Decay
Work fragmentation is the curse of the modern knowledge worker. You are deep in a complex task, an urgent email demands your attention, and you step away. When you return, you stare blankly at your screen, suffering from cognitive discontinuity. You wonder, “What on earth was I trying to do?”

Any interruption to a focused task causes a substantial disruption to the task flow when the user tries to resume the task. (Nano Banana 2)
To successfully resume an interrupted task, users must rebuild two distinct mental structures:
Working context: The physical artifacts. Which files were open? What documentation was I reading?
Mental context: The overarching goal. What was my intent? Why was I looking at these specific artifacts?
For decades, we have forced users to either write meticulous manual notes or waste precious cognitive cycles rereading their own work to reconstruct their mental model. Both are severe usability anti-patterns, relying on human discipline and perfect memory, neither of which exists in the real world. But recent advances in AI offer a superior interaction paradigm.
A fascinating new paper by Alexander Lill and colleagues from the University of Zurich in Switzerland, TaCoS: Generated Context Summaries for Task Resumption, brings rigorous empirical data to this exact problem, proving that AI can bridge the cognitive gap.
The Experiment: Testing Resumption Cues
The researchers built an IDE (Integrated Development Environment) extension that quietly monitors a software developer’s activity: code edits, file navigation, and browser history. When the developer returns from an interruption, the AI generates a Task Context Summary, featuring the inferred user intent and a structured list of recent actions.
To test this, the researchers ran a controlled lab experiment with 32 software developers. They interrupted the programmers mid-task and forced them to resume 1 to 7 days later, ensuring total context decay. Each participant tested three resumption cues:
Manual Notes: The traditional method, where users typed their own reminder before leaving.
Timeline: A chronological, exhaustive visual history of recent clicks and file edits.
Generated Summary (TaCoS): The automated AI synthesis of the user's overarching intent and completed steps.

The three study conditions for helping users reestablish their context after an interruption: the user’s own notes, a full history of what the user had been doing, and an AI-generated summary. (Nano Banana 2)
Empirical Findings: Synthesis Beats Raw Data
The results were a resounding victory for automated summarization.
The AI-generated summary produced the shortest resumption lag (the time it takes a user to make their first navigation action), dropping by 35% from 69 seconds for notes to 45 seconds for AI. AI-generated resumption summaries were also overwhelmingly preferred, with 27 out of 32 users choosing the AI summary over their own handwritten notes.

Users strongly preferred AI-generated summaries to their own notes, and they performed better when relying on AI. However, these users were all nerds, so it remains to be seen whether the findings will generalize to other knowledge workers. I suspect that the performance edge for AI will remain, but that the subjective preference may not be as strong. (NotebookLM)
Decent, but less impressive, results were recorded for edit lag (the time to resume actual work), which was 186 seconds in the manual condition and 172 seconds when using AI-generated summaries, for an 8% improvement. The number of successfully completed subtasks was 1.25 in the manual condition and 1.35 for AI-generated summaries, which was also 8% better.
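For readers who like to check the arithmetic, the reported improvements are simple relative changes. A quick sanity check, using the figures as quoted from the paper above:

```python
# Relative improvements reported in the TaCoS study (figures as quoted above).

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new, relative to old."""
    return (new - old) / old * 100

# Resumption lag: 69 s with manual notes -> 45 s with the AI summary (lower is better).
resumption_improvement = -pct_change(69, 45)   # ~34.8%, reported as 35%

# Edit lag: 186 s manual -> 172 s with the AI summary (lower is better).
edit_improvement = -pct_change(186, 172)       # ~7.5%, reported as 8%

# Completed subtasks: 1.25 manual -> 1.35 with the AI summary (higher is better).
subtask_improvement = pct_change(1.25, 1.35)   # 8%

print(round(resumption_improvement), round(edit_improvement), round(subtask_improvement))
```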
Why was AI better than the users’ own notes? Because human beings are notoriously unreliable at documenting their own mental state during an unexpected interruption. The AI summary provided the “why” (inferred intent) alongside the “what” (clickable links to relevant files) with zero manual effort.

People don’t know what they will need later, so their notes are not as useful as those automatically generated by AI. (Nano Banana 2)
Interestingly, while the raw timeline of edits technically produced a slightly higher task success rate (1.56), users complained it was visually overwhelming. As I have advised for decades, presenting raw system logs is not a user interface. Users need synthesized meaning, not a data dump. Assuming that giving the user all the data is the same as giving them the right data is a classic usability failure.

The pure data dump of everything the computer knew was overwhelming. (Nano Banana 2)
However, the study revealed one critical limitation of current AI: while the LLM was excellent at retrospective summarization (what happened), it failed at prospective planning (what to do next). The self-authored manual notes almost always contained the immediate next step. The developers relied on the AI to remember the past, but they needed their own brains to chart the future.
Generalizing to Slow AI
These findings generalize far beyond software development; they provide empirical validation for the necessity of Context Reboarding, a concept I introduced in my recent article on Slow AI.
As AI agents begin executing complex tasks that take hours or days to complete, we are effectively reviving the batch-processing paradigm of the 1960s. Frequent turn-taking vanishes. When a 20-hour AI task finally finishes, your mental context has completely decayed. You cannot be expected to remember the exact nuances of a prompt you wrote yesterday.
If your product features long-running AI tasks, you must design a Return Recap (or Resumption Summary). You cannot simply flash a generic “Task Complete” toast notification. The UI must explicitly welcome the user back by stating:
Original Intent: This is what you asked me to do.
Conceptual Breadcrumbs: Here are the key decisions I made while you were gone.
Current Status: Here is the outcome and the exact artifacts produced.
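As a rough sketch, the three elements above could be captured in a simple data structure that the UI renders when the user comes back. All names here are my own invention for illustration, not from any shipping product:

```python
# Hypothetical sketch of a "Return Recap" for long-running AI tasks.
# All names are invented for illustration purposes.
from dataclasses import dataclass, field

@dataclass
class ReturnRecap:
    original_intent: str                                   # "This is what you asked me to do."
    breadcrumbs: list[str] = field(default_factory=list)   # key decisions made while the user was gone
    current_status: str = ""                               # outcome and artifacts produced

    def render(self) -> str:
        """Render the recap as the welcome-back text the UI would show."""
        lines = [f"You asked me to: {self.original_intent}"]
        lines += [f"- While you were away: {step}" for step in self.breadcrumbs]
        lines.append(f"Status: {self.current_status}")
        return "\n".join(lines)

recap = ReturnRecap(
    original_intent="refactor the billing module",
    breadcrumbs=["split invoice logic into its own service",
                 "kept the public API unchanged"],
    current_status="done; 14 files changed, all tests passing",
)
print(recap.render())
```

The point of the structure is the ordering: intent first, then decisions, then status, so the user rebuilds mental context before confronting the result.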
The Future of AI UX: Hybrid Interfaces
The ultimate UX solution for the future of AI is a hybrid design: an AI-generated summary of past actions and intent, paired with a lightweight, user-authored “next step” input field. Let the computer do the tedious work of tracking history, while empowering the human to do the high-level strategic planning.
We must stop pretending that users possess infinite working memory. The TaCoS experiment proves that automatic, intent-driven summaries drastically reduce the cognitive friction of task resumption. Whether a user is interrupted by a ringing phone or by a 24-hour AI processing cycle, the UX mandate remains the same: do not make the user remember. Place the memory burden on computers. They are superior to our poor brains at keeping information from decaying over time.
AI Agents = 15% of Website Traffic
The way people find information online is increasingly mediated by AI agents rather than direct human queries. New data from SEO vendor BrightEdge shows how profound this shift has become. According to the company, AI agent requests, largely coming from systems like ChatGPT, Perplexity, Gemini, and Claude, have reached 88 percent of human organic search volume. These agents already generate about 15 percent of total website traffic as of April 2026. If growth continues, AI agent requests will surpass human search activity later this year.
Why does this matter? AI agents are not merely gathering information; they are making decisions on behalf of users. BrightEdge warns that at key decision‑making moments, such as choosing a product or service, the content that agents retrieve and interpret may determine which brands succeed. Yet most companies are unprepared. Only 19 percent have specific directives for AI bots, while 81 percent treat them like ordinary crawlers. Traditional robots.txt files may inadvertently block AI training agents or allow retrieval agents to misrepresent content, leading to missed opportunities or misaligned brand messaging.
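As a sketch of what an AI-agent directive could look like, here is a hypothetical robots.txt that distinguishes training crawlers from retrieval agents. Bot names vary by vendor and change over time, so verify each against the vendor’s current documentation before relying on them:

```
# Hypothetical robots.txt: welcome retrieval agents, opt out of model training.
# Bot names are examples; verify against each vendor's documentation.

# Block AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Allow retrieval agents that fetch pages to answer live user questions
User-agent: ChatGPT-User
Allow: /

User-agent: PerplexityBot
Allow: /

# Default rule for ordinary crawlers
User-agent: *
Allow: /
Disallow: /internal/
```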

AI accounts for an ever-growing share of website traffic, meaning that content strategists need to adapt to the new GEO realities. However, don’t forget the human users for now. (Nano Banana 2)
It’s time to develop AI agent strategies. This involves deciding which agents to welcome, what content they may access, and how to structure data for retrieval and training. From a UX perspective, designers should consider how AI agents summarize or rephrase content. Additionally, transparency about AI agent traffic is essential; companies need analytics tools that distinguish between human and agent visits to understand how their content is being used.
The findings also highlight the growing importance of Generative Engine Optimization (GEO), the practice of optimizing content for conversational AI. Unlike traditional SEO, GEO prioritizes concise, unambiguous facts and context that AI can easily parse. For designers and content strategists, this means rethinking information architecture, ensuring that key facts are prominent and that language is clear. Failure to adapt could mean that AI agents will simply ignore a company’s content, leaving users with incomplete or incorrect information.
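One concrete GEO tactic is to mark up key facts in structured data so that agents don’t have to infer them from prose. A minimal sketch using schema.org JSON-LD (the vocabulary is real; the product details are invented for illustration):

```
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget Pro",
  "description": "A widget, described in one unambiguous sentence.",
  "offers": {
    "@type": "Offer",
    "price": "49.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

An AI agent parsing this page gets the price and availability as unambiguous fields instead of having to extract them from marketing copy.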
BrightEdge’s report signals a shift in digital discovery. UX professionals must recognize that their audience is shifting from humans to AI agents. During this transition period, when humans still visit websites, designing websites and content that serve both human visitors and AI agents is critical for maintaining visibility and trust in an AI‑mediated world.
Headless Software = Agent-First Design = No UI
As I have been saying for a year, we’re heading for a world without user interfaces in the traditional sense. It is likely that humans will only interact with their AI agents, and the agents will then handle interactions with online services (formerly known as websites) and applications on behalf of the users. The previous news item, showing that AI agents already almost equal traditional search traffic to websites, demonstrates that this trend is well underway.

The AI handles all the interactions that previously required separate user interfaces for each website. (Nano Banana 2)
Yes, the AI agents will need UX design, because they do have a user interface, so it’s not literally true that there will be zero UI. However, AX > UX, to the extent that the agent experience becomes vastly more important than the little UX that remains.

Agents-First is becoming the design goal for many new projects. (Nano Banana 2)
Aaron Levie (head of cloud storage provider Box) wrote an interesting article about trends in AI agent use, summarizing meetings he has had with IT leaders in large enterprises across banking, media, retail, healthcare, consulting, tech, and sports, to discuss agents in the enterprise. Note that these are very conservative industries. (I have found Levie to be one of the most insightful commentators on where AI and UX are going, probably because — besides clearly being very smart — he has his finger on the pulse as both leading an AI-First company and meeting frequently with a wide range of people trying to implement AI in their companies. This guy doesn’t just sit in his office.)
Levie reported that “Headless software dominated my conversations. Enterprises need to be able to ensure all of their software works across any set of agents they choose. They will kick out vendors that don’t make this technically or economically easy.” (“Headless” software runs without a built‑in graphical user interface and typically exposes its capabilities via APIs or other non-visual interfaces, so any number of separate “heads” [frontends] can sit on top of it.)

Computer geeks are notoriously bad at naming, as we’ve seen with AI model names ever since ChatGPT. (Nano Banana 2)

Being “headless” means that software can be fully controlled without a screen (the “head,” or terminal in old-school IT systems).
It’s not simply the technical ability to use software without a GUI. That API interface must be easy for the agents to use: AX!
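To make the “headless” idea concrete, here is a toy sketch (all names invented) of a core service with no built-in UI, exposed through two interchangeable “heads”: one for humans and one for agents. The same capability serves both, which is the whole point:

```python
# Toy illustration of headless design: one core, many "heads".
# All class and function names are invented for this example.

class PayrollCore:
    """The headless core: pure capabilities, no UI assumptions."""
    def __init__(self):
        self.approved = []

    def approve_invoice(self, invoice_id: str) -> dict:
        self.approved.append(invoice_id)
        return {"invoice": invoice_id, "status": "approved"}

def human_head(core: PayrollCore, invoice_id: str) -> str:
    """A GUI or CLI 'head' renders the result as prose for a person."""
    result = core.approve_invoice(invoice_id)
    return f"Invoice {result['invoice']} is now {result['status']}."

def agent_head(core: PayrollCore, invoice_id: str) -> dict:
    """An agent 'head' returns structured data: good AX means no screen-scraping."""
    return core.approve_invoice(invoice_id)

core = PayrollCore()
print(human_head(core, "INV-001"))   # prose for humans
print(agent_head(core, "INV-002"))   # structured data for agents
```

Vendors whose capabilities are only reachable through the human head force agents to emulate clicks, which is exactly what enterprises are starting to reject.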

Companies are starting to reject software vendors that don’t make their software “headless,” so that it can be easily operated by an AI agent. (Nano Banana 2)
What’s the balance between UX and AX? We will soon get to the point of Agent-First Design, similar to the “Mobile-First Design” that was all the rage 10 years ago. For now, this doesn’t mean “no humans,” any more than the mobile-first era meant websites could ignore desktop visitors. It’s more a question of which matters more and where you lavish the most resources as the balance gradually shifts. It’s probably at least 10 years until we get to a true “No UI” situation, where companies only need to design for AI agents to access their software and websites.

Think normal people enjoy using websites? Think again! Most people will be happy if they never have to visit an online banking site ever again because their agent deals with the bank on their behalf. (Nano Banana 2)
While many people have wised up over the last year to the need to design for agents, Levie made a second observation in his article that I don’t see mentioned nearly as often: “Despite Silicon Valley’s sense that AI has made hard things easy, the most powerful ways to use agents is more ‘technical’ than prior eras of software. Skills, MCP, CLIs, etc., may be simple concepts for tech, but in the real world, these are all esoteric concepts that will require technical people to help bring to life in the enterprise.”
A call for more usability, if I ever heard one. (Levie thinks that the solution is to have more technical people to explain AI to the users. I think the solution is to make AI agents easier.)

AI agents are currently too difficult for many regular businesspeople to set up. We need better agent usability. (Nano Banana 2)
Profitable business use of AI requires new workflows designed for this technology, which again means that domain experts across the business must feel comfortable setting up the agents.

We’re not quite at the point yet where Agent Experience (AX) actually is more important than UX every time, but the balance is tipping, so this scenario will be true in more companies every year. (Nano Banana 2)
Suno Sale
Suno is currently the best AI model for making songs. My latest creation: Hamlet, the Music Video (YouTube, 4 min.), which features really neat animations made with Seedance 2.0.
Suno is offering 20% off full-year subscription plans, with the sale ending tomorrow, Tuesday, April 21, at 9 PM USA Eastern Time.
(This is not a sponsored post: I am a satisfied, paying customer, so I’m the one giving Suno money. They don’t give me anything.)
However, despite liking Suno, I recommend against this offer unless you are a heavy music creator. Annual plans should be avoided for most AI services because the likelihood is low that any given service will remain the best for a full year. This is less of a problem for the Big-3 foundation models (Google, OpenAI, and Anthropic), as well as the two contenders that may make it to a potential Big-5 list later this year: xAI and Meta. Even though the big AI models also alternate every few months on who’s ahead, the difference remains marginal. Let’s say that OpenAI’s rumored “Spud” model takes the lead in April. That lead may only last a few weeks until Google’s rumored release in May. Unless you have a use case where the last few percent in AI capability will make or break your product, just pick one model and stick with it.
However, for more narrowly targeted AI models, such as music, image, or video generation, the differences can be larger and not close as quickly after one model pulls ahead of the pack. You don’t want to be stuck having prepaid for the laggard for another half year.

My advice on AI subscriptions. (Nano Banana Pro)
Gemini 3.1 Pro Time Horizon = 6 Hours, 24 Minutes
METR has released its measurements of the T50 time horizon for Google’s Gemini 3.1 Pro with thinking level “high.” This model scores a time horizon of 6 hours 24 minutes, which is 12% better than GPT 5.4 (running at xhigh reasoning), which has a time horizon of 5 hours and 42 minutes.
Claude Opus 4.6 still holds the gold medal as the frontier model with the longest T50 time horizon, at 11 hours 59 minutes, which is a whopping 87% better than Gemini 3.1 Pro. When you look at the chart on the METR page linked above, you can see how Opus 4.6 is a true outlier in the current Big-3 model race toward superintelligence.
Of course, both OpenAI and Google are likely to have a comeback with their next models.
Gemini 3.0 Pro has a T50 of 3 hours 44 minutes, meaning that the leap from 3.0 to 3.1 was 71%. An immense gain for Google in only 3 months. (3.0 was released November 2025 and 3.1 was a February 2026 release.)
Google is heavily rumored in Silicon Valley to have its next model cooking for a May 2026 release, which would represent another 3 months of AI advances. If Google can bag another 71% improvement, they will virtually close the gap with Opus 4.6.
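The percentages above follow directly from the reported times. A quick check, using the figures as quoted from METR’s published numbers:

```python
# Sanity-checking the time-horizon percentages quoted above.

def minutes(hours: int, mins: int) -> int:
    """Convert an hours-and-minutes time horizon to total minutes."""
    return 60 * hours + mins

gemini_31 = minutes(6, 24)    # Gemini 3.1 Pro: 384 min
gpt_54    = minutes(5, 42)    # GPT 5.4:       342 min
opus_46   = minutes(11, 59)   # Opus 4.6:      719 min
gemini_30 = minutes(3, 44)    # Gemini 3.0 Pro: 224 min

print(round((gemini_31 / gpt_54 - 1) * 100))     # Gemini 3.1 vs GPT 5.4
print(round((opus_46 / gemini_31 - 1) * 100))    # Opus 4.6 vs Gemini 3.1
print(round((gemini_31 / gemini_30 - 1) * 100))  # Gemini 3.1 vs 3.0

# Projecting another 71% gain for Google's next model, in minutes,
# to compare against Opus 4.6's 719 minutes:
print(round(gemini_31 * 1.71))
```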

Google gains, but Claude is still King of time horizons, which is why most developers use it for their long-running agentic coding tasks. (Nano Banana 2)
Google actually needs to beat Opus 4.7, not 4.6. Unfortunately, we don’t know yet how good this model (launched April 16) is on the METR time horizon metric, and it’ll probably take a month or more for METR to complete its research. 4.7 outscores 4.6 on the standard AI industry benchmarks, but those benchmarks have less and less credibility as they get saturated and as AI labs train models specifically to beat the benchmarks.
Opus 4.6 was an outlier, relative to the “standard” exponential improvement of AI models over time on METR time horizons. I have two opposite theories for what this means for the Opus 4.7 score: One possibility is that the “miracle” that bumped 4.6 up so much was a one-off, and we’ll see a reversion to the norm with smaller improvements in subsequent versions (such as 4.7), until Claude is back on the trend line. The other possibility is that Anthropic has cracked recursive self-improvement (RSI) and that the new norm is ever-bigger gains. In this second case, the old trendline becomes irrelevant, and we’ll need to establish a new trend for the speed of RSI. The trillion-dollar question is obviously whether Anthropic will be the only lab to realize benefits from RSI, or whether it was just the first to release models within this paradigm. The history of AI suggests that the other labs will discover the same magic sauce and start accelerating on that same new trendline. The next half year or so will show whether everybody stays on the same old trendline or whether everybody moves to a new trendline. (Or, as a distinctly unlikely third possibility, Anthropic accelerates on a new line while the other labs stay on the old one.)

New, faster trendline for improving AI time horizons? Or will everybody revert to the mean predicted by the old trendline? A single data point clearly can’t differentiate between these two scenarios, so we will have to wait to find out. (Nano Banana 2)
As a reminder: the T50 time horizon is the time it would take a human expert to perform a task that the AI can do with 50% accuracy. In this case, Gemini probably executes most of these tasks in less than an hour, but the “time horizon” is measured as the human equivalent.
“You Are Not Talking to Somebody Who Woke Up a Loser”
During a recent appearance on the Dwarkesh Podcast, Nvidia CEO Jensen Huang delivered a brutally honest, meme-worthy statement regarding his competitive drive: “You are not talking to somebody who woke up a loser.”

My Hero: Jensen Huang may have delivered the defining quote of the modern era: I love his optimistic, can-do attitude. (Nano Banana 2)
Every UX professional needs to print this quote and tape it to their monitor. Lately, observing the self-pitying discourse in the design community, it feels like an entire industry woke up feeling exactly that way. You all need to stop thinking like losers about AI and how it will take all the jobs and eliminate UX design.
This pervasive doom-loop is fundamentally backwards. Rather than an apocalyptic threat to our livelihoods, AI is the greatest opportunity of the last two hundred years. The Industrial Revolution produced roughly a 50-fold increase in average living standards across 200 years. AI will compress a comparable transformation into 20 years, perhaps less. When humanity gets dramatically richer, what do rich people demand? Higher quality. Better experiences. Beautifully crafted products. Thoughtfully designed services. The history of affluence is the history of escalating design expectations. Peasants tolerated mud floors; their descendants commissioned Chippendale furniture. Today’s users tolerate text-box interfaces; tomorrow’s will demand AI experiences that make current design look medieval.

Stop with the negative thinking, already. (Nano Banana 2)

AI will make us all unfathomably rich. Just as we already live like kings compared to our ancestors who worked the fields from dawn to dusk. And the rich don't tolerate friction. They demand elegance, so they will turn away from bad UX. (Nano Banana 2)
And speaking of that text box: have we collectively lost our minds? The dominant interface paradigm of the most transformative technology since electricity is a chat window. A chat window! Usability testing shows current AI tools scoring abysmally against every heuristic that matters. Users cannot discover capabilities. They cannot recover from errors. They have no mental model of what the system can or cannot do. Feedback is inconsistent. Consistency is nonexistent. Visibility of system status is a running joke. This is not a triumph of AI; it is an indictment of AI UX. Someone will redesign these interfaces properly, and that someone will capture enormous value. Why not you? Did you wake up a loser?

The rich possibilities of AI are currently trapped inside a text box. We’re in the extremely early stage of AI interaction design. (Nano Banana 2)
Capturing the vast economic value of AI will not happen automatically. It necessitates immediate action across two massive frontiers, requiring extreme UX investments both in better UI for all these AI tools and in redesigning all enterprise workflows in all companies, using service design methodology to make them optimal for profiting from integrated agentic AI.

Virtually every single enterprise workflow will have to be redesigned for optimal AI integration. (Nano Banana 2)
Let us address the tools first. By relying on prompt engineering, the tech industry has inexplicably retreated to a conversational paradigm that is effectively a dressed-up command-line interface from the 1960s. This severely violates our most fundamental usability heuristic: recognition over recall. Expecting mainstream users to guess hidden system capabilities and articulate perfectly structured text prompts is a catastrophic usability failure. It creates a severe articulation barrier, shifting the cognitive burden from the computer back onto the user. We must move beyond the lazy, blinking cursor to invent rich, discoverable graphical user interfaces that actually guide human-computer interaction.

The dominant UI for AI is a time warp bringing back the bad old days. (Nano Banana 2)

Usability heuristic number 6, recognition over recall, is brutally violated by most current AI user interfaces. (Nano Banana 2)
Second, the true economic payoff extends far beyond individual consumer applications. You cannot simply drop a smart agent into a broken, legacy business process and expect a positive return on investment. The introduction of autonomous capabilities requires us to dismantle and completely rebuild the systematic way corporate work is conducted.
UX practitioners must step up to map out these complex, cross-functional processes from end to end. We are tasked with designing entirely new corporate ecosystems where human workers and autonomous agents operate in tandem. Crafting these intricate hybrid workflows to determine precisely who does what, and how exceptions are gracefully handled by human overseers, will require millions of hours of skilled, rigorous design work to actualize the promised productivity gains.

Stop whining about AI’s downsides, start fixing them. (Nano Banana 2)
The AI revolution is not the end of the UX profession; it is the ultimate full-employment act for designers. We have a world to fix. Stop whining about artificial intelligence, recognize the disastrous usability of today’s AI, and embrace the monumental challenge ahead of us. Stop waking up a loser.

Redesigning the way everything works is the opportunity of a lifetime. (Nano Banana 2)

Jensen Huang made many interesting points during his appearance on the Dwarkesh Podcast. Give the full 103-minute episode a listen. My apologies to Dwarkesh Patel for Nano Banana 2 drawing him without his trademark beard.



