How AI Works is a quick read that covers some of the modern history of AI from someone who's been in the industry and seen it change over the last few decades. The author has an interesting take on modern LLMs like ChatGPT and Bard and on how intelligent they really are. His view is that no, they're not alive in the sense that other life is, but that during training they've acquired an understanding of topics that seems deeper than simple next-token prediction or sentiment analysis. His conclusion is that this kind of AI needs more research.

My main takeaway was a better understanding of the different modern AI models, their pros and cons, and why they work for their associated problems. I've used Convolutional Neural Networks for image analysis, for example, but didn't appreciate that the reason they work is that they retain the spatial structure of the data. The author makes an interesting illustration of this: take an image of a digit and scramble all its pixels (using the same scrambling for every image), and a general deep neural network will predict which digit it is just as well as it does with the image unscrambled, because a fully connected network never uses the spatial arrangement of the pixels. The sketch below shows the idea.
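
This isn't from the book, just a minimal numpy sketch of the permutation point as I understand it: a fully connected layer gives the same output on scrambled pixels (with its weights scrambled to match, which is what training on scrambled data would learn), while a convolution depends on which pixels are neighbours.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny flattened "image" and one fixed pixel scrambling,
# applied identically to every image in the dataset.
image = rng.random(64)        # e.g. a flattened 8x8 digit
perm = rng.permutation(64)    # the scrambling

# Fully connected layer: y = W @ x. Scramble the pixels and the
# weight columns the same way and the output is identical, so the
# network can learn the scrambled task just as well -- it never
# "sees" the spatial arrangement of the pixels.
W = rng.random((10, 64))
print(np.allclose(W @ image, W[:, perm] @ image[perm]))  # True

# A convolution, by contrast, slides a kernel over neighbouring
# pixels, so scrambling them changes the result completely.
kernel = np.array([1.0, -1.0, 0.5])
print(np.allclose(np.convolve(image, kernel, mode="valid"),
                  np.convolve(image[perm], kernel, mode="valid")))  # False
```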

The author also shares a little love for a different approach to AI based on symbolic reasoning instead of connections, which isn't something I'd come across before, but then again he doesn't linger long on the subject as it's not where modern AI currently is.