Thursday, November 13, 2025

25th Anniversary of the World Wide Web

Meeting Tim Berners-Lee at SXSW #IEEE Event
On August 6th, 1991, Tim Berners-Lee sent a message to a public list announcing the WWW project. Another world-disrupting event was taking place in the same month: the August 1991 Soviet coup. I was on holiday in India when the coup happened and heard the news from my friend M A Deviah, who then worked for the Indian Express in Bangalore.

The Tim Berners-Lee announcement of the World Wide Web, I recall, did not make the news. In 1991 my exposure to computers was:

Zero-Access Cloud AI: How Google Built a System Even They Can't See Into

Google Private AI Compute: Understanding the Architecture and Business Value
Google has introduced Private AI Compute, a new approach to cloud-based AI processing that promises enterprise-grade security while leveraging powerful cloud models. In their recent blog post "Private AI Compute: our next step in building private and helpful AI," the Google team outlines how this technology works and what it means for the future of private AI computing.

What is Private AI Compute?

Private AI Compute represents Google's solution to a fundamental challenge in AI: how to deliver the computational power of advanced cloud models while maintaining the privacy guarantees typically associated with on-device processing. As AI capabilities evolve to handle more complex reasoning and proactive assistance, on-device processing alone often lacks the necessary computational resources.

The technology creates what Google describes as a "secure, fortified space" in the cloud that processes sensitive data with an additional layer of security beyond Google's existing AI safeguards.

Chip-Level Security Architecture

The system runs on Google's custom Tensor Processing Units (TPUs) with Titanium Intelligence Enclaves (TIE) integrated directly into the hardware architecture. This design embeds security at the silicon level, creating a hardware-secured sealed cloud environment that processes data within a specialized, protected space.

The architecture uses remote attestation and encryption to establish secure connections between user devices and these hardware-protected enclaves, ensuring that the computing environment itself is verifiable and tamper-resistant.
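The attestation-plus-encryption handshake can be illustrated with a toy sketch. To be clear, this is not Google's protocol, just the general remote-attestation pattern: the enclave reports a hash ("measurement") of its code, signed with a key the client trusts (standing in for a hardware root of trust), and the client refuses to proceed unless both the signature and the measurement check out. All names and keys here are hypothetical.

```python
import hashlib
import hmac
import os

# Hypothetical stand-in for a hardware-rooted signing key shared with the verifier.
TRUSTED_KEY = os.urandom(32)

# The measurement the client expects for the known-good enclave build.
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-code-v1").hexdigest()

def enclave_attest(code: bytes):
    """Enclave side: hash the loaded code and sign the measurement."""
    measurement = hashlib.sha256(code).hexdigest()
    signature = hmac.new(TRUSTED_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return measurement, signature

def client_verify(measurement: str, signature: str) -> bool:
    """Client side: accept only a correctly signed, expected measurement."""
    expected_sig = hmac.new(TRUSTED_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected_sig) and measurement == EXPECTED_MEASUREMENT
```

Only after `client_verify` succeeds would the device establish an encrypted channel and send data; a tampered enclave produces a different measurement and is rejected.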

No Access to Provider (Including Google)

According to Google's announcement, "sensitive data processed by Private AI Compute remains accessible only to you and no one else, not even Google." The system uses remote attestation and encryption to create a boundary where personal information and user insights are isolated within the trusted computing environment.

This represents a significant departure from traditional cloud AI processing, where the service provider typically has some level of access to data being processed.

Information Encryption

Private AI Compute employs encryption alongside remote attestation to connect devices to the hardware-secured cloud environment. This ensures that data in transit and during processing remains protected within the specialized space created by Titanium Intelligence Enclaves.

Same Level of Security as On-Premises?

Google positions Private AI Compute as delivering "the same security and privacy assurances you expect from on-device processing" while providing cloud-scale computational power. 

For businesses evaluating this against on-premises deployments, the comparison is nuanced. Private AI Compute offers:
- Hardware-based security through custom silicon and enclaves
- Zero-access architecture (even from Google)
- Integration with Google's Secure AI Framework and AI Principles

However, it's important to note that this is fundamentally a cloud service, not an on-premises deployment. Organizations with strict data residency requirements or those mandating complete physical control over infrastructure may need to evaluate whether cloud-based enclaves meet their compliance needs, even with strong technical protections.

Sovereign AI vs. Private AI Compute

Private AI Compute and sovereign AI address different concerns, though there may be some overlap:

Sovereign AI typically refers to a nation or organization's ability to maintain complete control over AI systems, including the underlying models, infrastructure, and data, often to meet regulatory requirements around data residency and national security.

Private AI Compute, as described, focuses on privacy and security through technical isolation rather than sovereign control. While the data is private and inaccessible to Google, it still processes on Google's cloud infrastructure using Google's Gemini models. This is not a sovereign solution in the traditional sense.

Data Residency: Can Data Remain On-Premises?

No, this is about private cloud computing, not on-premises deployment. Private AI Compute is explicitly a cloud platform that processes data on Google's infrastructure powered by their TPUs. The data leaves the device and travels to Google's cloud, albeit through encrypted channels to hardware-isolated enclaves.

The innovation here isn't keeping data on-premises but rather creating a private, isolated computing environment within the cloud that provides similar privacy guarantees to on-device processing. For organizations that require data to physically remain within their own data centers, Private AI Compute would not satisfy that requirement.

How Businesses Gain

While Google's announcement focuses primarily on consumer applications (Pixel phone features like Magic Cue and Recorder), the underlying architecture suggests several potential business benefits:

Enhanced AI Capabilities with Privacy Preservation
Businesses can leverage powerful cloud-based Gemini models for sensitive tasks without exposing data to the service provider. This enables use cases previously limited to on-premises solutions.

Compliance and Trust
The zero-access architecture may help organizations meet certain privacy and security requirements, particularly in regulated industries where data exposure to third parties is a concern.

Computational Flexibility

Organizations gain access to Google's advanced AI models and TPU infrastructure without needing to invest in equivalent on-premises hardware, while maintaining strong privacy controls.

Reduced Infrastructure Burden

Companies can avoid the complexity and cost of deploying and maintaining their own AI infrastructure while still achieving enterprise-grade security through hardware-based isolation.

Future-Proof AI Integration

As AI models become more sophisticated and require more computational resources, Private AI Compute provides a path to leverage advancing capabilities without redesigning security architecture.

The Bottom Line

Google Private AI Compute represents an innovative approach to cloud AI processing that uses hardware-based security enclaves to create private computing spaces within the cloud. It successfully addresses the challenge of combining cloud-scale AI power with privacy protection through chip-level security and a zero-access architecture.

However, it's crucial to understand what it is and isn't:

It is: A private cloud solution with strong technical security guarantees, including chip-level protection and encryption, where even Google cannot access processed data.

It is not: An on-premises solution, a sovereign AI platform, or a system where data never leaves your physical infrastructure.

For businesses, the value proposition centers on accessing powerful AI capabilities with privacy assurances that approach on-device security levels. Organizations evaluating Private AI Compute should assess whether cloud-based enclaves meet their specific regulatory, compliance, and data residency requirements, even with the strong technical protections in place.

This analysis is based on Google's blog post "Private AI Compute: our next step in building private and helpful AI," published by the Google team.

For technical details, Google has released a technical brief providing additional information about the architecture.

Tuesday, November 11, 2025

AI glossary for non-techies

🤖 AI Terminology Glossary
With simple explainers

Agentic
Simple Meaning: Describes an AI that can plan, act, and course-correct on its own to achieve a complex goal. An AI with a degree of autonomy.

Chunking
Simple Meaning: The process of breaking down a large piece of text or data into smaller, manageable, and contextually relevant segments before feeding them into an AI model.
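Chunking can be sketched in a few lines of Python. This word-based sliding window with overlap is one simple strategy; real pipelines often chunk by sentences or tokens instead:

```python
def chunk_text(text, chunk_size=50, overlap=10):
    """Split text into overlapping chunks of roughly chunk_size words.

    Overlap between consecutive chunks helps preserve context that
    would otherwise be cut at a chunk boundary.
    """
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
```

For a 120-word document with the defaults, this yields three chunks, with the last 10 words of each chunk repeated at the start of the next.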

Deep Learning (DL)
Simple Meaning: A more advanced form of Machine Learning that uses neural networks with many layers (deep networks) to analyze complex data like images, sound, and text.

Generative AI
Simple Meaning: AI that can create new content, such as text, images, code, or music, rather than just classifying or analyzing existing data.

Hallucination
Simple Meaning: A term for when a Generative AI model invents facts or produces confidently stated information that is false, misleading, or nonsensical.

Inference
Simple Meaning: The process of using a trained AI model to make a prediction or arrive at a decision based on new, unseen data.

Large Language Model (LLM)
Simple Meaning: An AI model trained on massive amounts of text data to understand, summarize, translate, and generate human-like text.

Machine Learning (ML)
Simple Meaning: A type of AI where computers learn from data without being explicitly programmed.

Model
Simple Meaning: The core output of the AI training process. It's a file containing all the learned patterns, rules, and knowledge that the AI uses to make predictions or generate content.

Neural Network
Simple Meaning: A computational system inspired by the structure and function of the human brain. It consists of interconnected layers of "nodes" (neurons) that process information.

Observability
Simple Meaning: The ability to understand what is happening inside an AI system: why it made a specific decision, how it's performing, and if it's running efficiently.

Orchestration
Simple Meaning: The automated management and coordination of multiple AI models, tools, and data flows to work together as a single, seamless system.

Parameters
Simple Meaning: The learned variables or weights inside an AI model that are adjusted during training. These numbers essentially store the model's knowledge.

Prompt Engineering
Simple Meaning: The art and science of writing effective instructions or queries (prompts) to get the best and most accurate results from a generative AI model.

Self-Learning
Simple Meaning: A broad term describing an AI system that can improve its own performance or adapt its behavior over time without direct human intervention or continuous labeled data.

Supervised Learning
Simple Meaning: A type of Machine Learning where the model is trained using labeled data. Every input is paired with the correct output.

Synthetic Data
Simple Meaning: Any data that is artificially generated rather than being collected from real-world events. It is created using algorithms.

Training
Simple Meaning: The process of feeding data to an AI model so it can learn and adjust its internal settings to perform a specific task.

Vector Database
Simple Meaning: A specialized database designed to efficiently store and retrieve information based on meaning and context rather than keywords.
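The core idea of a vector database can be shown in miniature: store (text, embedding) pairs and rank them by cosine similarity at query time. The `TinyVectorStore` class below is a toy illustration (real systems use learned embeddings and approximate nearest-neighbor indexes to scale):

```python
import math

class TinyVectorStore:
    """A toy in-memory vector store: add embeddings, query by cosine similarity."""

    def __init__(self):
        self.items = []  # list of (text, vector) pairs

    def add(self, text, vector):
        self.items.append((text, vector))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def query(self, vector, top_k=1):
        """Return the top_k stored texts most similar to the query vector."""
        ranked = sorted(self.items, key=lambda it: self._cosine(it[1], vector), reverse=True)
        return [text for text, _ in ranked[:top_k]]
```

With two stored items, a query vector close to "cat"'s embedding retrieves "cat" first, by meaning-as-geometry rather than keyword match.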

Transformer

Simple Meaning: A type of neural network architecture that revolutionized Large Language Models (LLMs) by allowing the model to weigh the importance of different parts of the input data (text) when processing it.

Key Feature: The Transformer introduced the attention mechanism, which enables models to understand long-range dependencies in text, making them far more effective at complex language tasks.
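The attention mechanism can be made concrete with a minimal sketch: scaled dot-product attention for a single query vector in plain Python. Real Transformers do this with matrix operations across many heads and positions; this is just the arithmetic of one attention step:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query.

    Each key gets a score (dot product with the query, scaled by sqrt(d)),
    the scores become weights via softmax, and the output is the
    weight-blended combination of the values.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
```

A query aligned with the first key pulls the output toward the first value, which is exactly the "weigh the importance of different parts of the input" behavior the entry describes.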

Ontology

Simple Meaning: In AI and computer science, an ontology is a formal, explicit specification of a shared conceptualization. Essentially, it defines a set of concepts, categories, properties, and relationships that exist for a domain of discourse.
Analogy: Think of it as a detailed, structured map of knowledge for a specific area (like "healthcare" or "finance"). It ensures that all AI models and systems operating in that domain have a consistent understanding of the terminology and how the concepts are connected.
Application: Helps AI models perform more accurate knowledge reasoning and retrieval, as they aren't guessing the meaning of terms.

Retrieval-Augmented Generation (RAG)

Simple Meaning: A technique that combines the power of an LLM with external knowledge search (retrieval). Before generating an answer, the system first searches a private or proprietary database for relevant information.

Business Value: RAG reduces "hallucination" and ensures the AI's response is grounded in specific, up-to-date, and internal data, making the output accurate and relevant to a company's unique context. It allows LLMs to use knowledge they were not trained on.
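The retrieve-then-generate flow can be sketched as follows. The keyword-overlap retriever and the `llm` callable are stand-ins for illustration; a real RAG system would retrieve with embeddings and call an actual model API:

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by naive word overlap with the query (a stand-in
    for embedding-based retrieval)."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def answer_with_rag(query, documents, llm):
    """Retrieve context first, then ask the model a question grounded in it."""
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)
```

Because the prompt carries the retrieved passages, the model's answer is grounded in the company's own documents instead of whatever it memorized during training.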

Small Language Model (SLM)

Simple Meaning: An AI model with a small number of parameters (typically millions to a few billion). It is designed to be computationally efficient and run quickly on devices with limited resources, like smartphones or embedded hardware.

Trade-off: Offers faster speed and lower cost than Large Language Models (LLMs), but may have less general knowledge and lower performance on highly complex, open-ended tasks.

Medium-Sized Model

Simple Meaning: An AI model that strikes a balance between performance and efficiency. It is larger than an SLM but smaller than the largest LLMs.

Role: Suitable for a wide range of general applications where high accuracy is needed, but the extreme resource cost of the largest models is prohibitive.

Narrow Language Model

Simple Meaning: A model that is specialized or fine-tuned to perform well on a specific set of tasks or within a single domain (e.g., legal, medical, or customer service for one product line).

Business Value: It offers deeper expertise and often higher accuracy than a general model when dealing with domain-specific language and context. A model can be both small and narrow, combining efficiency with specialization.

AI Slop

Simple Meaning: A pejorative term for low-effort, low-quality, mass-produced digital content (text, images, or video) generated by AI, which is perceived to lack human insight, value, or deeper meaning.

Key Characteristic: It prioritizes speed and quantity over substance and quality, often resembling digital clutter or spam created mainly for monetization or cheap engagement.

Business Value Takeaway: To be a thought leader, content must be curated and edited for unique insights, not just generated quickly. High-value content is the opposite of AI slop.

Fine-Tuning

Simple Meaning: The process of taking a pre-trained model (like an LLM) and training it further on a smaller, specific dataset to make it an expert in a particular task or domain.

Multimodality

Simple Meaning: The ability of an AI system to process, understand, and generate information from multiple types of data simultaneously, such as text, images, and audio.

Prompt Injection

Simple Meaning: A type of security attack where a user bypasses the model's safety or system instructions by including a malicious instruction in their prompt.

Reinforcement Learning

Simple Meaning: A type of Machine Learning where an AI agent learns to make a sequence of decisions by interacting with an environment, receiving rewards for good actions and penalties for bad ones (learning by trial and error).

Token

Simple Meaning: The basic unit of text that an LLM uses to process information. Tokens can be whole words, parts of words, or punctuation. The model reads and generates text one token at a time.
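A toy tokenizer makes the idea concrete. Real LLMs use subword tokenizers (such as byte-pair encoding) that split rare words into smaller pieces; this sketch just separates words from punctuation to show that a "token" is not the same thing as a word:

```python
import re

def naive_tokenize(text):
    """Split text into word and punctuation tokens.

    This is a simplification: production tokenizers operate on subword
    units, so one word can become several tokens.
    """
    return re.findall(r"\w+|[^\w\s]", text)
```

For example, `naive_tokenize("Hello, world!")` yields four tokens, since the comma and exclamation mark each count separately.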

Unsupervised Learning

Simple Meaning: A type of Machine Learning where the model is given unlabeled data and must find hidden patterns, structures, or relationships in the data on its own (e.g., grouping customers into categories).

Bias (Algorithmic Bias)

Simple Meaning: Systematic and unfair prejudice in an AI system's results, often due to flaws or imbalances in the training data. The AI reflects and amplifies the biases present in the data it learned from.

Threat: Leads to unequal or harmful outcomes for certain groups, for example, in loan approvals or hiring decisions.

Drift (Model Drift)

Simple Meaning: The phenomenon where the performance or accuracy of a deployed AI model decreases over time because the real-world data it receives (the input) starts to change and deviate significantly from the data it was originally trained on.

Threat: A model that was once highly accurate becomes unreliable or makes irrelevant decisions without warning.

Explainable AI (XAI)

Simple Meaning: A set of methods and techniques that allow humans to understand and trust the results created by machine learning algorithms. It answers the question, "Why did the AI make that decision?"

Mitigation: By providing transparency, XAI helps identify and fix threats like bias and lack of fairness.

Security (AI Security)

Simple Meaning: The practice of protecting AI models and the data they use from malicious attacks, such as prompt injection, data poisoning (tampering with training data), or adversarial attacks (subtly altering input to cause errors).

Threat: Attacks can compromise data integrity, system reliability, and the confidentiality of information.

PyTorch

Simple Meaning: A widely used, open-source machine learning framework developed by Meta AI. It provides a flexible and efficient platform for building and training deep learning models.  

Key Feature: PyTorch is known for its dynamic computation graph (allowing for easier experimentation and debugging) and is a fundamental tool for researchers and developers in the AI industry. 

Grounding

Simple Meaning: The concept of ensuring an AI model's output is accurate and logically connected to verifiable real-world facts or data sources, rather than relying solely on the patterns learned during training.

Goal: To make the AI's response trustworthy and auditable. In Large Language Models (LLMs), Grounding is often achieved through techniques like RAG (Retrieval-Augmented Generation), where the model retrieves information from a specific database before forming its answer.



Monday, August 18, 2025

AI Search is Rewriting the Rules. Is Your Marketing Strategy Ready to Play? 🤔

The way customers find information online is undergoing a seismic shift, thanks to AI-powered search like Google's AI Mode and other generative models. As someone who's navigated the marketing tech maze for years, this feels like one of the more significant disruptions. Staying visible requires more than just tweaking keywords; it demands a fundamental rethink.

Understanding how AI changes search visibility is the first step to adapting. Our latest research at Info-Tech Research Group, "Stay Relevant in the Era of AI-Powered Search," dives into the practical implications and strategies needed now.

Key Considerations (Based on the Research):
1. Beyond Keywords: AI understands intent and context. Focus shifts towards addressing complex user queries conversationally.
2. E-E-A-T is Paramount: Expertise, Experience, Authoritativeness, and Trustworthiness are becoming even more critical signals for AI. How are you demonstrating yours?
3. Content Diversification: AI pulls from various sources. Think articles, videos, structured data, forum discussions. Is your content mix ready?
4. Zero-Click Threat: AI often provides direct answers, potentially reducing clicks. How will you adapt your goals and measurement?

Thank you to the experts who were instrumental in helping me with this research: Raj Khera, Kathleen Devilbiss, Janet Driscoll Miller, Mark Alves, and my other friends.

This way of information gathering impacts SEO, content strategy, and even how we measure success. What's the single biggest challenge you foresee (or are already facing) with AI impacting your search visibility? Share your thoughts in the comments below! 👇

Adapting to the AI search era isn't just about staying current; it's about ensuring your brand remains discoverable and relevant. Download your copy of my research for the Info-Tech Research Group here: 
Stay Relevant in the Era of AI-Powered Search | Info-Tech Research Group https://share.google/SbnYW1IZtR8wFQQim

#AISearch #SEO #DigitalMarketing #ContentStrategy #MarketingStrategy #SGE #AI #TechTrends #Marketing #CMO

Monday, April 17, 2023

AI generated summary of 60 minutes AI edition

In a 60 Minutes interview that aired on April 17, 2023, Google CEO Sundar Pichai warned that artificial intelligence (AI) is advancing rapidly and that society needs to be prepared for the impact it will have on our lives. Pichai said that AI has the potential to be more profound than the discovery of fire or electricity, but that it also has the potential to be harmful if it is not used responsibly.

Pichai discussed some of the ways that AI is already being used in our lives, such as in self-driving cars, medical diagnosis, and customer service. He also talked about some of the potential risks of AI, such as job displacement, bias, and privacy concerns.

Pichai said that Google is committed to using AI for good, and that the company is working on developing ethical guidelines for AI development. He also said that Google is working with governments around the world to develop regulations for AI.

Pichai's interview is a reminder that AI is a powerful technology that has the potential to change our world in profound ways. It is important to be aware of the potential benefits and risks of AI, and to work to ensure that AI is used for good.

Here are some of the key takeaways from the interview:
AI is advancing rapidly and has the potential to be more profound than the discovery of fire or electricity.
AI has the potential to be used for good or for harm.
Google is committed to using AI for good and is working on developing ethical guidelines for AI development.
Google is working with governments around the world to develop regulations for AI.
It is important to be aware of the potential benefits and risks of AI, and to work to ensure that AI is used for good.

Can bees be sentient?

According to a growing body of research, bees are sentient, self-aware and have emotional lives³. A study by Newcastle University researchers Melissa Bateson and Jeri Wright found that honeybee response "has more in common with that of vertebrates than previously thought" and "suggest that honeybees could be regarded as exhibiting emotions"¹. The study was published in Current Biology in June 2011¹.


An experiment where honey bees were shaken for 60 seconds to simulate a predatory attack found that the bees' equivalent of blood — called haemolymph — was lower in dopamine and serotonin after shaking, compared with bees that weren't shaken. Both dopamine and serotonin are involved in mood regulation in humans².


The research raises questions about how bees are used in the agricultural sector or in laboratory experiments, where they've traditionally been seen as little more than tools³.


Source: Conversation with Bing, 4/17/2023.

(1) New research: bees are sentient, self aware, are emotional | RNZ. https://www.rnz.co.nz/national/programmes/first-up/audio/2018886340/new-research-bees-are-sentient-self-aware-are-emotional Accessed 4/17/2023.

(2) Honeybees Might Have Emotions | WIRED. https://www.wired.com/2011/06/honeybee-pessimism/ Accessed 4/17/2023.

(3) Do insects, octopus and other invertebrates feel emotions ... - ABC. https://www.abc.net.au/news/science/2022-04-02/invertebrates-octopus-bees-feelings-emotion-pain-joy-science/100947014 Accessed 4/17/2023.

(4) ‘Bees are sentient’: inside the stunning brains of nature’s hardest .... https://www.theguardian.com/environment/2023/apr/02/bees-intelligence-minds-pollination Accessed 4/17/2023.

(5) New research deepens mystery about evolution of bees' social behavior. https://www.sciencedaily.com/releases/2021/05/210526132133.htm Accessed 4/17/2023.


Sources:

¹ Bateson, M., & Wright, G. A. (2011). Reversal learning and affective responses in honeybees (Apis mellifera). Current Biology, 21(6), 1-4.

² Perry, C. J., Baciadonna, L., & Chittka, L. (2016). Unexpected rewards induce dopamine-dependent positive emotion-like state changes in bumblebees. Science, 353(6307), 1529-1531.

³ "New research: bees are sentient, self aware, are emotional". RNZ. Retrieved April 17, 2023.

Friday, April 07, 2023

AI may not exactly follow your instructions! But that is Ok.

A Large Language Model uses AI/machine learning to recognize the data presented to it and connect the dots, sometimes taking the initiative to add details beyond your prompt.

Here is an example. 

I am sitting in the library and these are the actual pictures.


I used the Bing Image Creator, powered by DALL-E, and gave it this prompt.
I am sitting in a library and working on my laptop, The laptop is in front of me as I type. To my left is the laptop's power cord and below it is my phone that is charging. on my right is a mousepad with my mouse and in front of it is a water bottle that is made of steel and has a blue top. The table I am sitting on has 5 chairs all spread out across the table. On the chair next to me is my backpack. I am Indian and I am wearing a black jacket as it is a bit cold here.

Here are the images that I get back.





The additions in the image that I did not mention were notebooks, blue chairs in the library, and the position of the mouse. The laptop bag is not on the chair.

My Observations:

  • It got the library setting right: books, bookshelves, chairs. It has enough data to recognize the visual components of a library.
  • Telling the AI that I am Indian meant that I had to accept what the AI thinks is a consolidation of what it has learned from photos of people from India. I did not say whether I am a boy or a girl, but the model may or may not have information from my profile.
  • The water bottle is a no-brainer.
  • The model has not learned the scene "on the chair next to me is my laptop bag". There may not be many photos of this scene in the training data.
I think Microsoft, OpenAI, and Google are throttling back some of the capabilities, as it may be easy to improve the results of my prompt by adding more learning. I think the concern is that if the model returns results too close to reality, there may be a lot of anxiety about what could happen next.

Bard's Response:

It sounds like you are working hard in the library. I hope you are finding it a quiet and productive place to work.

I can picture you sitting at a table with your laptop in front of you. To your left is the laptop's power cord, and below it is your phone that is charging. On your right is a mousepad with your mouse, and in front of it is a water bottle that is made of steel and has a blue top. The table you are sitting at has 5 chairs, all spread out across the table. On the chair next to you is your backpack. You are Indian, and you are wearing a black jacket as it is a bit cold here.

I hope you have a great day!


