Tomorrow as Today

Last week, Matt Shumer, co-founder and CEO of OthersideAI, wrote an article for Fortune titled “Something Big is Happening in AI – and Most People Will Be Blindsided.” Here are excerpts from Shumer’s article:

“I’ve spent six years building an AI startup and investing in the space. I live in this world. And I’m writing this for the people in my life who don’t…my family, my friends, the people I care about who keep asking me ‘so what’s the deal with AI?’ and getting an answer that doesn’t do justice to what’s actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. And for a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.”

“Here’s the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We’re not making predictions. We’re telling you what already occurred in our own jobs, and warning you that you’re next.”

“For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn’t just better than the last…it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.”

“Then, on February 5th, two major AI labs released new models on the same day…And something clicked. Not like a light switch…more like the moment you realize the water has been rising around you and is now at your chest.”

“I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just…appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.”

“But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn’t just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.”

“Let me make the pace of improvement concrete, because I think this is the part that’s hardest to believe if you’re not watching it closely.”

“In 2022, AI couldn’t do basic arithmetic reliably, it would confidently tell you that 7 x 8 = 54.”

“By 2023, it could pass the bar exam.”

“By 2024, it could write working software and explain graduate-level science.”

“By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.”

“On February 5th, 2026, new models arrived that made everything before them feel like a different era.”

“If you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.”

“There’s one more thing happening that I think is the most important development and the least understood.”

“On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this:

‘GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.’”

“Read that again. The AI helped build itself.”

Shumer goes on to write about what these AI transformations mean for employers and the jobs they currently offer, and which habits the average American should start building now. Here are Shumer's eight practices:

1. Start using AI seriously, not just as a search engine.

2. Commit to your work, understanding that work might change drastically over the next six months to a year.

3. Have no ego about this change.

4. Get your financial house in order.

5. Think about where you stand, and lean into what's hardest to replace.

6. Rethink what you're telling your kids.

7. Your dreams just got a lot closer.

8. Build the habit of adapting.

Thinking about Shumer’s last three practices listed above, what are we telling our kids these days about AI? Sadly, the answer is that too many school districts continue to ban AI or dismiss its effectiveness as a learning tool.

Are we allowing our kids to dream, and act upon those dreams? Or are we preparing kids to take standardized tests?

Are schools places where young learners experience adaptation, or is our K-12 system just a set of rigid, outdated expectations that prove uninteresting to the very youngsters that teachers and school leaders are supposed to serve?

What will the purpose of learning, not just of school, be in the next five years, or even in the next year?

Friday News Roundup tomorrow. Til then. SVB

