Thursday, November 13, 2025

Microsoft's AI Superfactory: Connecting Datacenters Across States to Build a Distributed Supercomputer

In a significant shift from traditional datacenter architecture, Microsoft has launched its first "AI superfactory" by connecting datacenters in Atlanta and Wisconsin through a dedicated high-speed network to function as a unified system for massive AI workloads. This marks a fundamental reimagining of how AI infrastructure is designed and deployed at hyperscale.

Based on reporting from Microsoft Source and The Official Microsoft Blog 

What is an AI Superfactory?

Unlike traditional datacenters designed to run millions of separate applications for multiple customers, Microsoft's AI superfactory runs one complex job across millions of pieces of hardware, with a network of sites supporting that single task.

 The Atlanta facility, which began operation in October, is the second in Microsoft's Fairwater family and shares the same architecture as the company's recently announced investment in Wisconsin.

The key innovation? These Fairwater AI datacenters are directly connected to each other through a new type of dedicated network allowing data to flow between them extremely quickly, creating what Microsoft describes as a "planet-scale AI superfactory."

Why Connect Datacenters Across 700 Miles?

Training frontier AI models requires hundreds of thousands of the latest NVIDIA GPUs working together on a single massive compute job. Each GPU processes a slice of the training data and shares its results with all the others, and every GPU must update the model in lockstep. Any bottleneck holds up the entire operation, leaving expensive GPUs sitting idle.
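
To make that synchronization pattern concrete, here is a minimal single-process sketch of data-parallel training: each simulated "GPU" computes a gradient on its own slice of data, the gradients are averaged (the all-reduce step), and every worker applies the identical update. The worker count, model, and data are toy placeholders for illustration, not anything from Microsoft's actual training stack.

```python
# Minimal single-process simulation of data-parallel training with an
# all-reduce step: each "GPU" computes a gradient on its own data slice,
# gradients are averaged across all workers, and every worker applies the
# same update. Worker count and data are illustrative, not Fairwater's scale.
import numpy as np

rng = np.random.default_rng(0)
num_workers = 4                      # stand-in for hundreds of thousands of GPUs
w = np.zeros(3)                      # shared model parameters (a tiny linear model)
X = rng.normal(size=(32, 3))         # toy dataset
y = X @ np.array([1.0, -2.0, 0.5])   # targets from a known linear rule

shards = np.array_split(np.arange(len(X)), num_workers)  # each worker's slice

for step in range(100):
    # 1. Each worker computes a local gradient on its shard.
    local_grads = []
    for shard in shards:
        err = X[shard] @ w - y[shard]
        local_grads.append(X[shard].T @ err / len(shard))
    # 2. All-reduce: average gradients so every worker sees the same result.
    #    Any slow worker here stalls the whole step -- the bottleneck the
    #    article describes.
    global_grad = np.mean(local_grads, axis=0)
    # 3. Every worker applies the identical update, keeping replicas in sync.
    w -= 0.1 * global_grad

print("learned weights:", np.round(w, 3))
```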

But if speed is critical, why build sites so far apart? The answer lies in power availability.

To ensure access to enough power, Fairwater has been distributed across multiple geographic regions, allowing Microsoft to tap into a variety of power sources and avoid exhausting the available energy in any one location. The Wisconsin and Atlanta sites are approximately 700 miles apart, spanning five states.

Revolutionary Architecture and Design

Two-Story Density Innovation

The two-story datacenter building approach allows for placement of racks in three dimensions to minimize cable lengths, which improves latency, bandwidth, reliability and cost. This matters because many AI workloads are very sensitive to latency, meaning cable run lengths can meaningfully impact cluster performance.

Cutting-Edge Hardware

Fairwater Atlanta features NVIDIA GB200 NVL72 rack-scale systems that can scale to hundreds of thousands of NVIDIA Blackwell GPUs, with a new chip and rack architecture that Microsoft says delivers the highest throughput per rack of any cloud platform available today.

The facility can support around 140kW per rack and 1,360kW per row, with each rack housing up to 72 Blackwell GPUs connected via NVLink.
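
As a rough back-of-the-envelope reading of those figures (and not an official Microsoft spec), the stated power budgets imply roughly nine to ten racks per row and several hundred GPUs per row:

```python
# Back-of-the-envelope figures implied by the numbers in this post
# (140 kW per rack, 1,360 kW per row, up to 72 Blackwell GPUs per rack).
# These are rough illustrations, not Microsoft's published engineering specs.
rack_power_kw = 140
row_power_kw = 1_360
gpus_per_rack = 72

racks_per_row = row_power_kw / rack_power_kw          # ~9.7 racks per row
gpus_per_row = int(racks_per_row) * gpus_per_rack     # ~648 GPUs in 9 full racks
power_per_gpu_kw = rack_power_kw / gpus_per_rack      # ~1.9 kW per GPU slot

print(f"racks per row:   {racks_per_row:.1f}")
print(f"GPUs per row:    ~{gpus_per_row}")
print(f"power per GPU:   ~{power_per_gpu_kw:.2f} kW")
```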

Advanced Cooling System

Microsoft engineered a complex closed-loop cooling system for its Fairwater sites that carries hot liquid out of the building, chills it, and returns it to the GPUs. Remarkably, the water used in Fairwater Atlanta's initial fill is equivalent to what 20 homes consume in a year, and it is replaced only if water chemistry indicates it is needed.

Power Innovation

The Atlanta site was selected with resilient utility power in mind and is capable of achieving 4×9 (99.99%) availability at 3×9 (99.9%) cost. By securing highly available grid power, Microsoft was able to forgo on-site generation, UPS systems, and dual-corded distribution, allowing it to reduce time-to-market and operate at a lower cost.
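
To translate the "nines" shorthand, the snippet below shows the standard availability arithmetic; the claim that the four-nines target is achieved at three-nines cost is Microsoft's, not something derived here.

```python
# What "3x9" vs "4x9" availability means in allowed downtime per year --
# a quick way to read the claim that Atlanta targets four-nines availability
# at three-nines cost. The mapping of nines to downtime is standard arithmetic;
# the cost comparison itself is Microsoft's claim, not computed here.
HOURS_PER_YEAR = 365 * 24

for nines, availability in [(3, 0.999), (4, 0.9999)]:
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{nines}x9 ({availability:.4%}): "
          f"~{downtime_hours:.1f} h/year (~{downtime_hours * 60:.0f} min)")
```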

The AI WAN: Stitching Sites Together

Microsoft has created a high-performance, high-resiliency backbone that directly connects different generations of supercomputers into an AI superfactory that exceeds the capabilities of a single site across geographically diverse locations.

This AI WAN lets AI developers tap Microsoft's broader network of Azure AI datacenters, segmenting traffic by workload needs across scale-up and scale-out networks within a site as well as across sites via the continent-spanning AI WAN. This is a departure from the past, when all traffic had to use the same network regardless of workload requirements.
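
A purely illustrative sketch of that "fit-for-purpose networking" idea follows; the tier names, scopes, and routing rules are invented for the example and are not Azure's actual traffic classes or routing logic.

```python
# Illustrative-only sketch of steering traffic to different network tiers
# depending on what the workload needs. Names and rules are invented here.
from dataclasses import dataclass

@dataclass
class Transfer:
    name: str
    scope: str            # "intra_rack", "intra_site", or "cross_site"
    latency_sensitive: bool

def pick_network(t: Transfer) -> str:
    if t.scope == "intra_rack":
        return "scale-up fabric (e.g. NVLink within a rack)"
    if t.scope == "intra_site" and t.latency_sensitive:
        return "scale-out backend network within the site"
    return "AI WAN between sites"

jobs = [
    Transfer("tensor-parallel all-gather",  "intra_rack", True),
    Transfer("data-parallel gradient sync", "intra_site", True),
    Transfer("checkpoint replication",      "cross_site", False),
]
for job in jobs:
    print(f"{job.name:32s} -> {pick_network(job)}")
```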

Scale and Impact

The numbers are staggering. Microsoft spent more than $34 billion on capital expenditures in its most recent quarter, much of it on datacenters and GPUs, to keep up with soaring AI demand.

The Fairwater network will use "multigigawatts" of power, and one of the biggest customers will be OpenAI, which is already heavily reliant on Microsoft for its compute infrastructure needs. It will also cater to other AI firms including French startup Mistral AI and Elon Musk's xAI Corp, while Microsoft reserves some capacity for training its proprietary models.

How Businesses Gain

Accelerated Model Development

This approach means that instead of a single facility training an AI model, multiple sites work in tandem on the same task, enabling what the company calls a "superfactory" capable of training models in weeks instead of months.

Access to Frontier Computing Power

Businesses partnering with Microsoft gain access to what is effectively a distributed supercomputer without building their own infrastructure. The result is a commercialized shared supercomputer—a superfactory—sold as Azure capacity, providing enterprise customers access to frontier-scale computing that would be prohibitively expensive to build independently.

Improved Resource Utilization

The design provides fit-for-purpose networking at a more granular level and makes capacity more fungible, maximizing flexibility and utilization. This means businesses can better match their workloads to the appropriate computing resources.

Shorter Iteration Cycles

Microsoft argues the superfactory model cuts training cycles from months to weeks for large models by eliminating I/O and communication bottlenecks and by enabling much larger parallelism. For enterprises and model developers, shorter iteration cycles translate directly to faster productization and competitive advantage.
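
The arithmetic behind "months to weeks" is simple scaling, sketched below with made-up numbers; the baseline duration, GPU counts, and cross-site efficiency factor are assumptions for illustration, not Microsoft's figures.

```python
# Rough scaling arithmetic: if a second site adds GPUs at some cross-site
# efficiency, wall-clock training time shrinks roughly in proportion.
# All numbers below are hypothetical.
baseline_days = 90            # hypothetical single-site training run (~3 months)
single_site_gpus = 100_000
added_site_gpus = 100_000
cross_site_efficiency = 0.85  # assumed fraction of ideal speedup across the WAN

effective_gpus = single_site_gpus + added_site_gpus * cross_site_efficiency
speedup = effective_gpus / single_site_gpus
print(f"speedup: {speedup:.2f}x -> ~{baseline_days / speedup:.0f} days")
```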

 Future-Scale Readiness

The design goal is to support the training of future AI models with parameter scales reaching trillions, as AI training workflows grow increasingly complex, encompassing stages such as pre-training, fine-tuning, reinforcement learning, and evaluation.

The Broader Context

Microsoft's announcement shows the rapid pace of the AI infrastructure race among the world's largest tech companies, with Amazon taking a similar approach with its Project Rainier complex in Indiana, while Meta, Google, OpenAI and Anthropic are making similar multibillion-dollar bets.

Microsoft has quietly moved from single-site, ultra-dense GPU farms to a deliberately networked approach, marking a shift in hyperscale thinking: designing buildings not as separate multi-tenant halls but as tightly engineered compute modules that can be federated into one distributed compute fabric.

 What This Means for the Future

Microsoft's AI superfactory represents more than just bigger datacenters—it's a fundamental rethinking of how AI infrastructure should work at scale. By treating multiple geographically distributed sites as a single unified system, Microsoft is addressing the twin challenges of AI computing: the need for massive computational power and the practical limits of power availability and cooling at any single location.

For businesses, this means access to AI capabilities that were previously available only to those who could build their own supercomputing infrastructure. The superfactory model democratizes access to frontier AI computing while accelerating the pace of innovation across the industry.

As AI models continue to grow in size and capability, the superfactory approach may become the new standard for how hyperscalers deliver AI services—not through isolated datacenters, but through interconnected networks of specialized facilities working as one.

25th Anniversary of the World Wide Web

Meeting Tim Berners-Lee at SXSW #IEEE Event
On August 6th, 1991, Tim Berners-Lee sent a message to a public mailing list announcing the WWW project. Another world-disrupting event was taking place in the same month: the August 1991 Soviet coup. I was on holiday in India when the coup happened and heard the news from my friend M A Deviah, who then worked for the Indian Express in Bangalore.

The Tim Berners-Lee announcement of the World Wide Web, as I recall, did not make the news. In 1991 my exposure to computers was:

Zero-Access Cloud AI: How Google Built a System Even They Can't See Into

Google Private AI Compute: Understanding the Architecture and Business Value
Google has introduced Private AI Compute, a new approach to cloud-based AI processing that promises enterprise-grade security while leveraging powerful cloud models. In their recent blog post "Private AI Compute: our next step in building private and helpful AI," the Google team outlines how this technology works and what it means for the future of private AI computing.

What is Private AI Compute?

Private AI Compute represents Google's solution to a fundamental challenge in AI: how to deliver the computational power of advanced cloud models while maintaining the privacy guarantees typically associated with on-device processing. As AI capabilities evolve to handle more complex reasoning and proactive assistance, on-device processing alone often lacks the necessary computational resources.

The technology creates what Google describes as a "secure, fortified space" in the cloud that processes sensitive data with an additional layer of security beyond Google's existing AI safeguards.

Chip-Level Security Architecture

The system runs on Google's custom Tensor Processing Units (TPUs) with Titanium Intelligence Enclaves (TIE) integrated directly into the hardware architecture. This design embeds security at the silicon level, creating a hardware-secured sealed cloud environment that processes data within a specialized, protected space.

The architecture uses remote attestation and encryption to establish secure connections between user devices and these hardware-protected enclaves, ensuring that the computing environment itself is verifiable and tamper-resistant.
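
The general attestation-then-encrypt pattern can be sketched as follows. This is generic illustrative code for the concept only, not Google's Private AI Compute protocol or API, and the keys and measurements are stand-ins.

```python
# A minimal, generic sketch of attestation-then-encrypt: the client only
# releases data after verifying a signed statement of what code the enclave
# is running. Illustrative pseudocode of the general idea, not Google's
# actual Private AI Compute protocol.
import hashlib, hmac, os

TRUSTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-build").hexdigest()
ATTESTATION_KEY = b"hardware-root-of-trust-key"   # stands in for a vendor-signed key

def enclave_attest() -> dict:
    """The enclave reports a measurement of its code, signed by hardware."""
    measurement = hashlib.sha256(b"approved-enclave-build").hexdigest()
    signature = hmac.new(ATTESTATION_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def client_verify_and_send(report: dict, sensitive_data: bytes) -> bytes | None:
    """The device checks the attestation before releasing any data."""
    expected_sig = hmac.new(ATTESTATION_KEY, report["measurement"].encode(),
                            hashlib.sha256).hexdigest()
    if report["measurement"] != TRUSTED_MEASUREMENT or \
       not hmac.compare_digest(report["signature"], expected_sig):
        return None                       # refuse: environment is not verified
    key = os.urandom(32)                  # session key established after attestation
    # Toy XOR "encryption" purely to show data is protected before it leaves.
    return bytes(b ^ key[i % 32] for i, b in enumerate(sensitive_data))

report = enclave_attest()
ciphertext = client_verify_and_send(report, b"user query: summarize my notes")
print("data released to enclave:", ciphertext is not None)
```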

No Access to Provider (Including Google)

According to Google's announcement, "sensitive data processed by Private AI Compute remains accessible only to you and no one else, not even Google." The system uses remote attestation and encryption to create a boundary where personal information and user insights are isolated within the trusted computing environment.

This represents a significant departure from traditional cloud AI processing, where the service provider typically has some level of access to data being processed.

 Information Encryption

 Private AI Compute employs encryption alongside remote attestation to connect devices to the hardware-secured cloud environment. This ensures that data in transit and during processing remains protected within the specialized space created by Titanium Intelligence Enclaves.

 Same Level of Security as On-Premises?

Google positions Private AI Compute as delivering "the same security and privacy assurances you expect from on-device processing" while providing cloud-scale computational power. 

For businesses evaluating this against on-premises deployments, the comparison is nuanced. Private AI Compute offers:
- Hardware-based security through custom silicon and enclaves
- Zero-access architecture (even from Google)
- Integration with Google's Secure AI Framework and AI Principles

However, it's important to note that this is fundamentally a cloud service, not an on-premises deployment. Organizations with strict data residency requirements or those mandating complete physical control over infrastructure may need to evaluate whether cloud-based enclaves meet their compliance needs, even with strong technical protections.

Sovereign AI vs. Private AI Compute

Private AI Compute and sovereign AI address different concerns, though there may be some overlap:

Sovereign AI  typically refers to a nation or organization's ability to maintain complete control over AI systems, including the underlying models, infrastructure, and data, often to meet regulatory requirements around data residency and national security.

Private AI Compute, as described, focuses on privacy and security through technical isolation rather than sovereign control. While the data is private and inaccessible to Google, it still processes on Google's cloud infrastructure using Google's Gemini models. This is not a sovereign solution in the traditional sense.

Data Residency: Can Data Remain On-Premises?

No, this is about private cloud computing, not on-premises deployment. Private AI Compute is explicitly a cloud platform that processes data on Google's infrastructure powered by their TPUs. The data leaves the device and travels to Google's cloud, albeit through encrypted channels to hardware-isolated enclaves.

The innovation here isn't keeping data on-premises but rather creating a private, isolated computing environment within the cloud that provides similar privacy guarantees to on-device processing. For organizations that require data to physically remain within their own data centers, Private AI Compute would not satisfy that requirement.

How Businesses Gain

While Google's announcement focuses primarily on consumer applications (Pixel phone features like Magic Cue and Recorder), the underlying architecture suggests several potential business benefits:

Enhanced AI Capabilities with Privacy Preservation
Businesses can leverage powerful cloud-based Gemini models for sensitive tasks without exposing data to the service provider. This enables use cases previously limited to on-premises solutions.

Compliance and Trust
The zero-access architecture may help organizations meet certain privacy and security requirements, particularly in regulated industries where data exposure to third parties is a concern.

Computational Flexibility

Organizations gain access to Google's advanced AI models and TPU infrastructure without needing to invest in equivalent on-premises hardware, while maintaining strong privacy controls.

 Reduced Infrastructure Burden

Companies can avoid the complexity and cost of deploying and maintaining their own AI infrastructure while still achieving enterprise-grade security through hardware-based isolation.

Future-Proof AI Integration

As AI models become more sophisticated and require more computational resources, Private AI Compute provides a path to leverage advancing capabilities without redesigning security architecture.

The Bottom Line

Google Private AI Compute represents an innovative approach to cloud AI processing that uses hardware-based security enclaves to create private computing spaces within the cloud. It successfully addresses the challenge of combining cloud-scale AI power with privacy protection through chip-level security and a zero-access architecture.

However, it's crucial to understand what it is and isn't:

It is: A private cloud solution with strong technical security guarantees, including chip-level protection and encryption, where even Google cannot access processed data.

It is not: An on-premises solution, a sovereign AI platform, or a system where data never leaves your physical infrastructure.

For businesses, the value proposition centers on accessing powerful AI capabilities with privacy assurances that approach on-device security levels. Organizations evaluating Private AI Compute should assess whether cloud-based enclaves meet their specific regulatory, compliance, and data residency requirements, even with the strong technical protections in place.

This analysis is based on Google's blog post "Private AI Compute: our next step in building private and helpful AI" published by the Google team.

For technical details, Google has released a technical brief providing additional information about the architecture.

Tuesday, November 11, 2025

AI glossary for non-techies

🤖 AI Terminology Glossary
With simple explainers 
 Agentic
   Simple Meaning: Describes an AI that can plan, act, and course-correct on its own to achieve a complex goal. An AI with a degree of autonomy.

Chunking
  Simple Meaning: The process of breaking down a large piece of text or data into smaller, manageable, and contextually relevant segments before feeding them into an AI model.
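
  A tiny sketch of what chunking looks like in practice (the chunk size and overlap below are arbitrary choices for the example):

```python
# Split a long text into overlapping, fixed-size pieces before handing them
# to a model. Sizes here are arbitrary example values.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap     # overlap keeps context across boundaries
    return chunks

document = "AI infrastructure keeps evolving. " * 30
pieces = chunk_text(document)
print(f"{len(document)} characters -> {len(pieces)} chunks")
```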

Deep Learning (DL)
    Simple Meaning: A more advanced form of Machine Learning that uses neural networks with many layers (deep networks) to analyze complex data like images, sound, and text.

 Generative AI
   Simple Meaning: AI that can create new content, such as text, images, code, or music, rather than just classifying or analyzing existing data.

 Hallucination
   Simple Meaning: A term for when a Generative AI model invents facts or produces confidently stated information that is false, misleading, or nonsensical.
 
Inference
    Simple Meaning: The process of using a trained AI model to make a prediction or arrive at a decision based on new, unseen data.

 Large Language Model (LLM)
    Simple Meaning: An AI model trained on massive amounts of text data to understand, summarize, translate, and generate human-like text.

 Machine Learning (ML)
   Simple Meaning: A type of AI where computers learn from data without being explicitly programmed.

 Model
   Simple Meaning: The core output of the AI training process. It's a file containing all the learned patterns, rules, and knowledge that the AI uses to make predictions or generate content.

 Neural Network
   Simple Meaning: A computational system inspired by the structure and function of the human brain. It consists of interconnected layers of "nodes" (neurons) that process information.
 
 Observability
    Simple Meaning: The ability to understand what is happening inside an AI system—why it made a specific decision, how it's performing, and if it's running efficiently.

 Orchestration
    Simple Meaning: The automated management and coordination of multiple AI models, tools, and data flows to work together as a single, seamless system.

 Parameters
   Simple Meaning: The learned variables or weights inside an AI model that are adjusted during training. These numbers essentially store the model's knowledge.

 Prompt Engineering
   Simple Meaning: The art and science of writing effective instructions or queries (prompts) to get the best and most accurate results from a generative AI model.

  Self-Learning
    Simple Meaning: A broad term describing an AI system that can improve its own performance or adapt its behavior over time without direct human intervention or continuous labeled data.

  Supervised Learning
   Simple Meaning: A type of Machine Learning where the model is trained using labeled data. Every input is paired with the correct output.

  Synthetic Data
   Simple Meaning: Any data that is artificially generated rather than being collected from real-world events. It is created using algorithms.

 Training
    Simple Meaning: The process of feeding data to an AI model so it can learn and adjust its internal settings to perform a specific task.
 
Vector Database
   Simple Meaning: A specialized database designed to efficiently store and retrieve information based on meaning and context rather than keywords.
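
  For the curious, a toy version of the idea, using hand-made three-dimensional vectors in place of the real learned embeddings (which have hundreds or thousands of dimensions):

```python
# Store "embeddings" and return the entry whose vector is closest in meaning
# to a query vector. The 3-dimensional vectors are hand-made for the example.
import numpy as np

docs = {
    "reset your password":        np.array([0.9, 0.1, 0.0]),
    "quarterly revenue report":   np.array([0.1, 0.9, 0.2]),
    "forgot login credentials":   np.array([0.8, 0.2, 0.1]),
}
query = np.array([0.85, 0.15, 0.05])      # pretend embedding of "can't sign in"

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print("best match:", ranked[0])
```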

Transformer

Simple Meaning: A type of neural network architecture that revolutionized Large Language Models (LLMs) by allowing the model to weigh the importance of different parts of the input data (text) when processing it.

Key Feature: The Transformer introduced the attention mechanism, which enables models to understand long-range dependencies in text, making them far more effective at complex language tasks.
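
For readers who want to peek under the hood, the attention mechanism at the core of the Transformer is usually written as scaled dot-product attention; this is the standard formulation from the original "Attention Is All You Need" paper, with Q, K, V the query, key, and value matrices and d_k the key dimension.

```latex
\[
  \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V
\]
```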

Ontology

Simple Meaning: In AI and computer science, an ontology is a formal, explicit specification of a shared conceptualization. Essentially, it defines a set of concepts, categories, properties, and relationships that exist for a domain of discourse.
Analogy: Think of it as a detailed, structured map of knowledge for a specific area (like "healthcare" or "finance"). It ensures that all AI models and systems operating in that domain have a consistent understanding of the terminology and how the concepts are connected.
Application: Helps AI models perform more accurate knowledge reasoning and retrieval, as they aren't guessing the meaning of terms.

Retrieval-Augmented Generation (RAG)

Simple Meaning: A technique that combines the power of an LLM with external knowledge search (retrieval). Before generating an answer, the system first searches a private or proprietary database for relevant information.

Business Value: RAG reduces "hallucination" and ensures the AI's response is grounded in specific, up-to-date, and internal data, making the output accurate and relevant to a company's unique context. It allows LLMs to use knowledge they were not trained on.
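
A minimal sketch of the flow; the keyword-overlap "retrieval" and the prompt template below are simplified stand-ins for embedding search and a real model call.

```python
# Retrieve relevant internal text first, then put it into the prompt so the
# model answers from that context. Knowledge base and scoring are toy examples.
knowledge_base = [
    "Fairwater Atlanta came online in October and links to the Wisconsin site.",
    "Private AI Compute runs on TPUs with Titanium Intelligence Enclaves.",
    "The AI WAN lets training traffic span geographically separate sites.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    words = set(question.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda doc: len(words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When did the Atlanta site come online?"))
```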

Small Language Model (SLM)

Simple Meaning: An AI model with a small number of parameters (typically millions to a few billion). It is designed to be computationally efficient and run quickly on devices with limited resources, like smartphones or embedded hardware.

Trade-off: Offers faster speed and lower cost than Large Language Models (LLMs), but may have less general knowledge and lower performance on highly complex, open-ended tasks.

Medium-Sized Model

Simple Meaning: An AI model that strikes a balance between performance and efficiency. It is larger than an SLM but smaller than the largest LLMs.

Role: Suitable for a wide range of general applications where high accuracy is needed, but the extreme resource cost of the largest models is prohibitive.

Narrow Language Model

Simple Meaning: A model that is specialized or fine-tuned to perform well on a specific set of tasks or within a single domain (e.g., legal, medical, or customer service for one product line).

Business Value: It offers deeper expertise and often higher accuracy than a general model when dealing with domain-specific language and context. A model can be both small and narrow, combining efficiency with specialization.

AI Slop

Simple Meaning: A pejorative term for low-effort, low-quality, mass-produced digital content (text, images, or video) generated by AI, which is perceived to lack human insight, value, or deeper meaning.

Key Characteristic: It prioritizes speed and quantity over substance and quality, often resembling digital clutter or spam created mainly for monetization or cheap engagement.

Business Value Takeaway: To be a thought leader, content must be curated and edited for unique insights, not just generated quickly. High-value content is the opposite of AI slop.

Fine-Tuning

Simple Meaning: The process of taking a pre-trained model (like an LLM) and training it further on a smaller, specific dataset to make it an expert in a particular task or domain.

Multimodality

Simple Meaning: The ability of an AI system to process, understand, and generate information from multiple types of data simultaneously, such as text, images, and audio.

Prompt Injection

Simple Meaning: A type of security attack where a user bypasses the model's safety or system instructions by including a malicious instruction in their prompt.

Reinforcement Learning

Simple Meaning: A type of Machine Learning where an AI agent learns to make a sequence of decisions by interacting with an environment, receiving rewards for good actions and penalties for bad ones (learning by trial and error).

Token

Simple Meaning: The basic unit of text that an LLM uses to process information. Tokens can be whole words, parts of words, or punctuation. The model reads and generates text one token at a time.

Unsupervised Learning

Simple Meaning: A type of Machine Learning where the model is given unlabeled data and must find hidden patterns, structures, or relationships in the data on its own (e.g., grouping customers into categories).

Bias (Algorithmic Bias)

Simple Meaning: Systematic and unfair prejudice in an AI system's results, often due to flaws or imbalances in the training data. The AI reflects and amplifies the biases present in the data it learned from.

Threat: Leads to unequal or harmful outcomes for certain groups, for example, in loan approvals or hiring decisions.

Drift (Model Drift)

Simple Meaning: The phenomenon where the performance or accuracy of a deployed AI model decreases over time because the real-world data it receives (the input) starts to change and deviate significantly from the data it was originally trained on.

Threat: A model that was once highly accurate becomes unreliable or makes irrelevant decisions without warning.

Explainable AI (XAI)

Simple Meaning: A set of methods and techniques that allow humans to understand and trust the results created by machine learning algorithms. It answers the question, "Why did the AI make that decision?"

Mitigation: By providing transparency, XAI helps identify and fix threats like bias and lack of fairness.

Security (AI Security)

Simple Meaning: The practice of protecting AI models and the data they use from malicious attacks, such as prompt injection, data poisoning (tampering with training data), or adversarial attacks (subtly altering input to cause errors).

Threat: Attacks can compromise data integrity, system reliability, and the confidentiality of information.

PyTorch

Simple Meaning: A widely used, open-source machine learning framework developed by Meta AI. It provides a flexible and efficient platform for building and training deep learning models.  

Key Feature: PyTorch is known for its dynamic computation graph (allowing for easier experimentation and debugging) and is a fundamental tool for researchers and developers in the AI industry. 

Grounding

Simple Meaning: The concept of ensuring an AI model's output is accurate and logically connected to verifiable real-world facts or data sources, rather than relying solely on the patterns learned during training.

Goal: To make the AI's response trustworthy and auditable. In Large Language Models (LLMs), Grounding is often achieved through techniques like RAG (Retrieval-Augmented Generation), where the model retrieves information from a specific database before forming its answer.



Monday, August 18, 2025

AI Search is Rewriting the Rules. Is Your Marketing Strategy Ready to Play? 🤔

The way customers find information online is undergoing a seismic shift, thanks to AI-powered search experiences like Google's AI Mode and other generative models. As someone who has navigated the marketing tech maze for years, I see this as one of the more significant disruptions. Staying visible requires more than just tweaking keywords; it demands a fundamental rethink.

Understanding how AI changes search visibility is the first step to adapting. Our latest research at Info-Tech Research Group, "Stay Relevant in the Era of AI-Powered Search," dives into the practical implications and strategies needed now.

Key Considerations (Based on the Research):
1. Beyond Keywords: AI understands intent and context. Focus shifts towards addressing complex user queries conversationally.
2. E-E-A-T is Paramount: Experience, Expertise, Authoritativeness, and Trustworthiness are becoming even more critical signals for AI. How are you demonstrating yours?
3. Content Diversification: AI pulls from various sources. Think articles, videos, structured data, forum discussions. Is your content mix ready?
4. Zero-Click Threat: AI often provides direct answers, potentially reducing clicks. How will you adapt your goals and measurement?

Thank you to the experts who were instrumental in helping me with this research: Raj Khera, Kathleen Devilbiss, Janet Driscoll Miller, Mark Alves, and my other friends.

This shift in information gathering impacts SEO, content strategy, and even how we measure success. What's the single biggest challenge you foresee (or are already facing) with AI impacting your search visibility? Share your thoughts in the comments below! 👇

Adapting to the AI search era isn't just about staying current; it's about ensuring your brand remains discoverable and relevant. Download your copy of my research for the Info-Tech Research Group here: 
Stay Relevant in the Era of AI-Powered Search | Info-Tech Research Group https://share.google/SbnYW1IZtR8wFQQim

#AISearch #SEO #DigitalMarketing #ContentStrategy #MarketingStrategy #SGE #AI #TechTrends #Marketing #CMO

Monday, April 17, 2023

AI generated summary of 60 minutes AI edition

In a 60 Minutes interview that aired on April 17, 2023, Google CEO Sundar Pichai warned that artificial intelligence (AI) is advancing rapidly and that society needs to be prepared for the impact it will have on our lives. Pichai said that AI has the potential to be more profound than the discovery of fire or electricity, but that it also has the potential to be harmful if it is not used responsibly.

Pichai discussed some of the ways that AI is already being used in our lives, such as in self-driving cars, medical diagnosis, and customer service. He also talked about some of the potential risks of AI, such as job displacement, bias, and privacy concerns.

Pichai said that Google is committed to using AI for good, and that the company is working on developing ethical guidelines for AI development. He also said that Google is working with governments around the world to develop regulations for AI.

Pichai's interview is a reminder that AI is a powerful technology that has the potential to change our world in profound ways. It is important to be aware of the potential benefits and risks of AI, and to work to ensure that AI is used for good.

Here are some of the key takeaways from the interview:
AI is advancing rapidly and has the potential to be more profound than the discovery of fire or electricity.
AI has the potential to be used for good or for harm.
Google is committed to using AI for good and is working on developing ethical guidelines for AI development.
Google is working with governments around the world to develop regulations for AI.
It is important to be aware of the potential benefits and risks of AI, and to work to ensure that AI is used for good.

Can bees be sentient?

According to a growing body of research, bees are sentient, self-aware and have emotional lives³. A study by Newcastle University researchers Melissa Bateson and Jeri Wright found that honeybee response "has more in common with that of vertebrates than previously thought" and "suggest that honeybees could be regarded as exhibiting emotions"¹. The study was published in Current Biology in June 2011¹.


An experiment where honey bees were shaken for 60 seconds to simulate a predatory attack found that the bees' equivalent of blood — called haemolymph — was lower in dopamine and serotonin after shaking, compared with bees that weren't shaken. Both dopamine and serotonin are involved in mood regulation in humans².


The research raises questions about how bees are used in the agricultural sector or in laboratory experiments, where they've traditionally been seen as little more than tools³.


Source: Conversation with Bing, 4/17/2023

(1) New research: bees are sentient, self aware, are emotional | RNZ. https://www.rnz.co.nz/national/programmes/first-up/audio/2018886340/new-research-bees-are-sentient-self-aware-are-emotional Accessed 4/17/2023.

(2) Honeybees Might Have Emotions | WIRED. https://www.wired.com/2011/06/honeybee-pessimism/ Accessed 4/17/2023.

(3) Do insects, octopus and other invertebrates feel emotions ... - ABC. https://www.abc.net.au/news/science/2022-04-02/invertebrates-octopus-bees-feelings-emotion-pain-joy-science/100947014 Accessed 4/17/2023.

(4) ‘Bees are sentient’: inside the stunning brains of nature’s hardest .... https://www.theguardian.com/environment/2023/apr/02/bees-intelligence-minds-pollination Accessed 4/17/2023.

(5) New research deepens mystery about evolution of bees' social behavior. https://www.sciencedaily.com/releases/2021/05/210526132133.htm Accessed 4/17/2023.


Sources:

Bateson, M., & Wright, G. A. (2011). Reversal learning and affective responses in honeybees (Apis mellifera). Current Biology, 21(6), 1-4.

Perry, C. J., Baciadonna, L., & Chittka, L. (2016). Unexpected rewards induce dopamine-dependent positive emotion-like state changes in bumblebees. Science, 353(6307), 1529-1531.

"New research: bees are sentient, self aware, are emotional". RNZ. Retrieved April 17, 2023.
