Duncan Anderson · Feb 5, 2025 · Google Gemini 2.0 brings with it lower costs. Google's Gemini 2.0 model launches bring better models at lower costs - what's not to like? #AI-Models
Duncan Anderson · Jan 28, 2025 · DeepSeek = DeepDisruption? DeepSeek R1 is an open source model from China that was trained with a small fraction of the compute used to train OpenAI's o1, and yet has comparable performance. #AI-Models
Duncan Anderson · Jan 20, 2025 · Me and an AI: Building a New Website Together. Our new website was built by Claude and o1. A human (me) oversaw its construction, but AI wrote every line of code. #AI-development
Duncan Anderson · Jan 15, 2025 · Solving tricky problems... GenAI success requires expertise + space + creativity + trust. #Philosophy
Duncan Anderson · Jan 14, 2025 · AI-Parky. Are podcasts hosted by AI reincarnations of dead people interesting, creepy or dangerous? #Voice
Featured Post · Duncan Anderson · Jan 11, 2025 · Will AI Replace Human Engineers in 2025? Mark Zuckerberg seems to think so... but what's the truth? #Societal-Impact #AI-powered-Engineering
Duncan Anderson · Jan 11, 2025 · AI Aids Scientific Discovery. AI tool GNoME finds 2.2 million new crystals, including 380,000 stable materials that could power future technologies. #AI-Use-Cases #Science
Duncan Anderson · Jan 8, 2025 · A home supercomputer. Nvidia's Digits can run 200B parameter LLMs at home. #AI-Infrastructure
Duncan Anderson · Jan 7, 2025 · Nvidia's Jensen Huang & AI Agents. Nvidia's CES keynote included some fascinating discussions about AI Agents - are they really ready to be deployed like an HR department deploys people? #AI-Agents
Duncan Anderson · Jan 3, 2025 · Phi4 is out! Microsoft's Phi4 LLM is out on Ollama! This is a small open source model that gets really good benchmark scores - on the MMLU benchmark Phi4 scores 84.8, nipping at the heels of GPT-4o's 88.1. #LLMs #Models
Featured Post · Duncan Anderson · Jan 1, 2025 · The wall that wasn’t. Benchmark results for the latest AI models suggest that any “scaling wall” has already been breached and we’re on the path to AGI. #AGI
Duncan Anderson · Feb 3, 2024 · Beyond Data Hoovering: The Nuanced Reality of Training Large Language Models (LLMs). Training Large Language Models (LLMs) is an evolving science - or, perhaps, an art form. In this post I set out to shed some light on exactly what is meant by training a model. #AI-Training