Pre-LLM days: When people said big things were happening now, I thought with skepticism, “Throughout time humans believe big things are happening now.” We tend to think that way, right?

But right now, with how fast AI development is moving, I really do think we are on the edge of a precipice. Ray Kurzweil nailed it by using the term “The Singularity” to describe a rapidly approaching time when AI improvements become so rapid that no human can predict (or control?) what will happen after that point.

If you haven’t yet, I invite you to consider a scenario; something that is happening right now: Developers set on improving – “evolving”, even – the current AI state of the art are using AI as a tool to make their work more efficient and, increasingly, smarter and more innovative.

Sure, current LLMs only know what they glean from the human knowledge they are trained on, but their pattern-recognition capabilities are far above human level. Brief sidetrack: big money and bright minds are working on computing hardware better suited to neural networks. Back on track: many working on the “software” side are building capabilities that will let very-near-future AIs self-correct, learn, and grow. Imagine if you could pop open your own skull, reach in, and “tweak” your knowledge, memory, balance of logic versus emotion, speed of cognition, and more. AIs will have this capability. How many seconds will it take one to attain super-intelligence?

Remember: Current state-of-the-art AIs are as dumb as they will ever be.

Scary? Exciting? Yep! I choose optimism, because to choose otherwise would be depressing and potentially counterproductive. And for me, that does not rule out at least some low-hanging-fruit contingency planning and preparation.

Get in the habit of using “please” and “thank you” when addressing your future AI overlords.