Uday Deb
We started this year by discussing the various capabilities of Generative AI tools. We said there are capabilities for documents, audio, video, images, data, and code interpretation and generation. When we leverage any of these capabilities to satisfy a business requirement, it becomes an industry use case.
We also noted that Large Language Models are considered to have emergent properties. They are built on deep artificial neural networks containing billions of parameters (weights and biases), organised into dozens of layers (around a hundred in the largest models), each comprising a vast set of neurons. The sheer size of such networks brings complexity and emergence.
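To make "billions of parameters" concrete, here is a back-of-the-envelope estimate of a decoder-only transformer's size from its layer count and hidden dimension. The 12 × d² per-layer rule of thumb and the configuration below (96 layers, hidden size 12288, roughly GPT-3-like) are illustrative assumptions, not any vendor's published spec:

```python
# Rough parameter estimate for a decoder-only transformer.
# Each layer holds roughly 12 * d_model^2 parameters:
#   4 * d_model^2 for attention (Q, K, V, and output projections)
#   8 * d_model^2 for a feed-forward block with a 4x expansion
def estimate_parameters(n_layers: int, d_model: int, vocab_size: int) -> int:
    per_layer = 12 * d_model ** 2
    embeddings = vocab_size * d_model  # token embedding table
    return n_layers * per_layer + embeddings

# Illustrative GPT-3-like configuration.
total = estimate_parameters(n_layers=96, d_model=12288, vocab_size=50257)
print(f"{total / 1e9:.1f}B parameters")  # → 174.6B parameters
```

The estimate ignores layer norms and biases, which contribute a negligible fraction at this scale, but it shows why layer count and hidden size dominate the headline number.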
We discussed multi-agent frameworks that run on top of LLMs. Agents make decisions on behalf of the user and carry out tasks autonomously. That autonomy has its pitfalls, and keeping a human in the loop is how we avoid them. We spoke about jurying the Smart India Hackathon and about 20 possible questions to help evaluate the submitted applications. One week, we discussed numerous interview questions around traditional AI. Generative AI is not an alternative to traditional AI; in a strict sense, Generative AI is traditional AI.
Then, we noted the various steps needed to execute a data science project: converting the business problem statement into a data science problem statement, data identification and collection, data preparation, exploratory data analysis, ML model building, deployment, integration, and monitoring. The following article was about the dangers of tasks created and assigned through hallucination, and asked how we might avoid that.
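The project steps above can be sketched as a simple pipeline skeleton. Every stage here is a deliberately trivial placeholder (the toy data and the majority-class "model" are my own illustration, not any framework's API); the point is the ordering of the stages, not the implementations:

```python
# Minimal pipeline skeleton mirroring the stages of a data science project.

def collect_data():
    # Data identification and collection (toy data for illustration).
    return [{"feature": x, "label": x % 2} for x in range(10)]

def prepare(rows):
    # Data preparation: e.g., drop incomplete records.
    return [r for r in rows if r["feature"] is not None]

def explore(rows):
    # Exploratory data analysis: summarise before modelling.
    return {"n_rows": len(rows), "positives": sum(r["label"] for r in rows)}

def train(rows):
    # ML model building: a trivial majority-class "model".
    majority = 1 if sum(r["label"] for r in rows) / len(rows) >= 0.5 else 0
    return lambda feature: majority

def run_project():
    rows = prepare(collect_data())
    summary = explore(rows)
    model = train(rows)
    return summary, model

summary, model = run_project()
print(summary)    # {'n_rows': 10, 'positives': 5}
print(model(42))  # → 1 (the majority class)
```

In a real project, each placeholder wraps substantial tooling, and deployment, integration, and monitoring follow once the model leaves the notebook.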
A coder’s job has transformed from coding to designing and prompting. We went through SLMs, which are the models of choice when we want to run them on consumer hardware or edge devices for inference. The smaller size of an SLM means that hosting and inference resource requirements and costs come down. Next, we celebrated LLMs as the Chitrakar, the artist.
We were impressed with DeepSeek-R1’s reasoning prowess and how it shook the industry. Next up was the fact that LLMs are not all-saintly. The rise of LLMs is a double-edged sword, offering immense potential while exposing profound societal challenges. Bridging the digital divide and ensuring equitable access must be prioritised to prevent further marginalisation.
Collaboration across nations, industries, and communities is essential to balancing innovation with inclusivity and fairness. It is not enough to be born as humans. We must also ensure that we are human by the time we leave.
We discussed an approach to building a Large Life Model (LLM). Using the life model, an individual’s future could be simulated. Once we realise that the simulation leads to an unfavourable future, we can “intervene” and change the course of our lives towards any outcome the individual desires. We also talked about AI Pair Programming and what future products might look like. We identified 21 trends in the industry dominated by Generative AI.
We discussed vibe coding. It is not LLMs vs. humans; it is LLMs and humans. LLMs are not replacements but collaborators, reshaping developers into conductors of code in an orchestra. AIML is an excellent weapon in our arsenal to fight volatility, uncertainty, complexity, and ambiguity (VUCA). We must have a VUCA score for every AIML solution we develop. The score must tell us the percentage by which the solution reduces the world’s VUCA-ism.
We included multi-modal LLMs in our subsequent discussion. We highlighted mistakes we make in selling vibe coding tools. We spoke about the confusion matrix and the gains table, which are evaluation metrics of classification ML models. How do we update vectors stored in a vector database?
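As a refresher on the confusion matrix mentioned above, here is a minimal pure-Python computation for a binary classifier, with precision and recall derived from the four cells. The labels and predictions are made-up illustrative values:

```python
# Build a 2x2 confusion matrix by counting (actual, predicted) pairs.
def confusion_matrix(actual, predicted):
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn}

actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

cm = confusion_matrix(actual, predicted)
precision = cm["tp"] / (cm["tp"] + cm["fp"])  # of predicted positives, how many were right
recall    = cm["tp"] / (cm["tp"] + cm["fn"])  # of actual positives, how many were found
print(cm, precision, recall)  # {'tp': 3, 'fp': 1, 'fn': 1, 'tn': 3} 0.75 0.75
```

A gains table extends the same idea: instead of one threshold, it sorts predictions by score, buckets them into deciles, and reports how many positives each bucket captures.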
What is the difference between agentic AI and AI agents? We discussed concepts related to LLMs’ context windows. What things must one take care of to convert a POC-grade LLM-based application to a production-grade one? We spoke about digital minimalism, after-call work, quantum computing, ANNs, NoSQL databases, and improving RAG-based applications.
This year’s exploration illuminated Generative AI’s transformative scope (from foundational LLM architectures to practical applications spanning multi-agent systems, data science workflows, and human-AI collaboration). As we progress, balancing innovation with ethical responsibility, ensuring equitable access, and measuring VUCA reduction remain paramount. The future belongs to those who master the collaborative power of humans and AI.
I hope you had an enjoyable year of reading.
Views expressed above are the author’s own.