Simon Quigley: AI and what it actually means

A popular topic of public conversation in 2025 is balance. How do we balance budgets, how do we balance entities, and how do we balance perspectives? How do we balance the right of free expression with our ability to effectively convey a message?

Here’s another popular topic of conversation… AI. What is it? What does it do?

I’m going to give you some resources, as someone who first learned the inner workings of AI about ten years ago.

I’ll start with the presentation I gave in middle school. Our objective was to present on a topic of our choice, and we would be graded on our presentation skills. Instead of talking about specific things or events, I talked about the broader idea of fully establishing an artificial form of intelligence.

This is the video I used as a basis for that presentation:

https://medium.com/media/21d2427a502b7c7cb669220e2e3478c8/href

Not only did I explain exactly how this specific video game worked; working through it also helped me understand machine learning and genetic algorithms. If I’m recalling correctly, the actual title of my presentation had to do with genetic algorithms specifically.

In the presentation, I specifically tied in Darwin’s writings on evolution (of course, I had to keep it secular…), directly relating the information I learned about evolution in science class to a presentation about what would become AI.
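If “genetic algorithm” is a new term for you, here’s a minimal sketch of the idea in Python. To be clear, this is not the code from the video, and the target list and parameters below are made up purely for illustration. The point is the loop itself: score a population, keep the fittest, and breed the next generation through crossover and mutation. That’s the same Darwinian cycle my presentation leaned on.

```python
# A minimal genetic-algorithm sketch (illustrative only; not the code from the
# video). Each "genome" is a list of numbers, and fitness is simply how close
# the genome gets to a hypothetical target. In the video, fitness would instead
# be how far the character travels through the level.
import random

TARGET = [7, 1, 3, 9, 5]          # hypothetical goal the population evolves toward
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 100, 0.1

def fitness(genome):
    # Higher is better: negative distance from the target.
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))

def mutate(genome):
    # Occasionally nudge a gene, the way the video's AI tweaks its behavior.
    return [g + random.choice([-1, 1]) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    # Combine two parents by picking each gene from one of them.
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.randint(0, 9) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # "Survival of the fittest": keep the best half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # Breed a new generation from random pairs of surviving parents.
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

population.sort(key=fitness, reverse=True)
print("best genome:", population[0], "fitness:", fitness(population[0]))
```

Run it a few times and you’ll watch the population drift toward the target, which is really all “survival of the fittest” means once it’s written down in code.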

“But Simon, the title of that video says Machine Learning. Do you have your glasses on?!?”

Yes, yes I do. It took me a few years to watch this space evolve, as I focused on other portions of the open source world. This changed when I attended SCaLE 21x. At that conference, the product manager for AI at Canonical (apologies if I’m misquoting your exact title) gave a presentation on how she sees this space evolving. It’s a “must watch,” in my opinion:

https://medium.com/media/a13b2e46fc8acaa3bebf01a7f7bdeebb/href

This comprehensive presentation covers the entire space, and does an excellent job of giving the whole picture.

The short of it is this… calling everything “AI” is inaccurate. Throwing AI at everything under the sun isn’t the right approach either. Speaking of the sun, it will get us if we don’t find a sustainable way to generate all the energy we’ll need.

I also read a paper on this issue, which I believe ties it together nicely. Published in June 2024, it’s titled Situational Awareness: The Decade Ahead, and it does an excellent job of predicting how this space will evolve. So far, it’s been very accurate.

The reason I’m explaining this is fairly simple. In 2025, I still don’t think many people have taken the time to dig into the content. From many conversations I’ve heard, including one I took notes on in an entirely personal capacity, I’m finding that not many people have a decent idea of where this space is going.

It’s been researched! :)

If someone can offer a dissenting view of the artificial intelligence space in the comments, I’d be more than happy to hear it. Here’s where I think this connects to the average person…

Many of the open source companies right now, without naming names, are focusing too much on the corporate benefits of AI. Yes, AI will be an incredibly useful tool for large organizations, and it will have a great benefit for how we conduct business over the course of the next decade. But do we have enough balance?

Before you go all-in on AI, just do your research, please. Take a look at your options, and choose one that is correctly calibrated with the space as you see it.

Lastly, when I talk about AI, I always bring up Orwell. I’m a very firm believer in free speech. AI must not be used to censor content, and the people who design your AI stack are very important. Look at which of the options, as a company, enforces a diversity policy that is consistent with your values. The values of that company will carry over into its product. If you think I’m wrong about this point, seriously, go read 1984 by George Orwell a few times over. You’ll get the picture of what we’re looking to avoid.

In short, there’s no need to over-complicate AI for those who don’t understand it. Use the video game example. It’s simple, and it works. Try using that same sentiment in your messaging, too. Appealing to both companies and individual users should be important for open source companies, especially those with a large user base.

I wish you all well. If you’re getting to the end of this post and you’re mad at me, sorry about that. Go re-read 1984 just one more time, please. ;)


