Let’s Go Mano a Mano With AI
How Spaghetti Eating Will Smith Inspired My 2026 New Year’s Resolutions
Goodbye 2025.
Hello 2026.
I started reminiscing about how, as a teen growing up in the 90s, I used to love the old Conan O’Brien sketches where he would slap on a black robe and make wild predictions about what would happen in the year 2000. The year 2000! That seemed so far into the future at the time, and now it’s more than two decades in the past!
Remember Y2K?! All of our computers were supposed to blow up while airplanes fell from the sky. Yeah, that was more than 25 years ago!! Y2K clearly didn’t pan out the way we predicted, but computers and AI are certainly front and center in our lives right now. One thing that definitely could have been part of Conan’s predictions is the official/unofficial benchmark being used to gauge generative AI’s progress: the Will Smith spaghetti test.
In 2023, an AI-generated video was created of Will Smith plowing through a plate of spaghetti. It is definitely odd and somewhat unsettling. For whatever reason, that prompt is the one that keeps being regenerated and used to show off how quickly AI is improving. Recent versions of the clip certainly are more impressive and look a lot more like a real Will Smith eating a real bowl of spaghetti. Yet, something continues to feel…off.
Generative AI continues to improve at an astonishing and scary rate, yet there are still a few giveaways that remind us an image, video, or essay was AI-created. An overuse of certain punctuation marks (particularly dashes) or music that still doesn’t quite follow typical lyrical or rhythmic patterns are common AI indicators. However, there is one element that still proves very difficult for AI to create: human hands. AI-generated hands frequently have extra fingers, impossible joints, or simply blur together. Even in the recent Will Smith tests, his hands eerily melt into the spaghetti and look quite rubbery.

I find it fascinating and significant that AI struggles with generating human hands. As I’m writing this, there are fields of server farms burning an insane amount of energy to perfect human hand generation, and an equally insane number of influencers, developers, and other people eagerly waiting to generate and post the next Will Smith spaghetti video to show off just how amazing, crazy, or realistic things are looking now. Yet, I’m worried that in the hunt for perfectly generated AI hands, we are rapidly losing the beauty of the imperfect creations that our real hands are able to produce.
You might enjoy the documentary “Sound City,” which offers a fascinating look at the iconic studio that once recorded legendary albums by bands like Fleetwood Mac, Beastie Boys, and Heart. Nirvana’s Nevermind album was also recorded there. The studio was famous for its large analog console and recording space, which captured unique tones and sounds. As music production moved more toward digital tools, studios like Sound City became less common and started to close. Dave Grohl, the drummer for Nirvana and frontman for Foo Fighters, loved the human touch of tape recording. The documentary follows his journey to buy the Sound City console to keep it alive and use it to record new albums. He believes that the small imperfections in that style of recording remind us that a real person was on the other end of the drumsticks.
It’s tricky, right? Digital production tools allow an 8-year-old to crack open GarageBand on an iPad, splice a couple of tracks together, and let the AI master and equalize a pretty solid-sounding tune. With a few quick prompts and 30 seconds of spare time, anybody can plop down on the couch and produce a convincing 2-minute animated video. Is that really the worst thing ever? Isn’t it great that we have access to tools that make it so easy to create so much content? I suppose I’m just worried that we’re allowing digital assistants to do such a large part of that creation that we’re starting to lose the power of the process along the way.
As I look ahead to 2026, I’m struck by the idea that there are still some things that only human hands can do, things that AI just can’t replicate. Will I focus on helping AI improve its human hands by feeding it data, or will I use the real, physical hands I already have more intentionally? The human experience is all about its unique imperfections. I much prefer seeing a brushstroke that’s a little off, hearing a singer’s voice crack for a moment, or getting a text that feels like it was truly written by the sender, rather than being overwhelmed by the flawless perfection that deep AI integration promises.
So, my New Year’s resolution is pretty straightforward: I’m ready to get creative, serve others, work hard, and connect with people in ways that go beyond just tapping away on a screen. I dare AI to try and keep up! These hands of mine are unique and impossible to copy.

