vault backup: 2026-03-08 01:13:08

Obsidian/.obsidian/workspace.json (vendored, 28 lines changed):
```diff
@@ -20,8 +20,23 @@
         "icon": "lucide-file",
         "title": "Qwen3.5"
       }
+    },
+    {
+      "id": "81b68ab88145cf57",
+      "type": "leaf",
+      "state": {
+        "type": "markdown",
+        "state": {
+          "file": "Google Gemini.md",
+          "mode": "source",
+          "source": false
+        },
+        "icon": "lucide-file",
+        "title": "Google Gemini"
+      }
+    }
     ]
   ],
   "currentTab": 1
 }
 ],
 "direction": "vertical"
@@ -94,7 +109,7 @@
   "state": {
     "type": "backlink",
     "state": {
-      "file": "Qwen3.5.md",
+      "file": "Google Gemini.md",
       "collapseAll": false,
       "extraContext": false,
       "sortOrder": "alphabetical",
@@ -104,7 +119,7 @@
       "unlinkedCollapsed": true
     },
     "icon": "links-coming-in",
-    "title": "Обратные ссылки для Qwen3.5"
+    "title": "Обратные ссылки для Google Gemini"
   }
 },
 {
@@ -181,7 +196,7 @@
   "state": {
     "type": "footnotes",
     "state": {
-      "file": "Qwen3.5.md"
+      "file": "Google Gemini.md"
     },
     "icon": "lucide-file-signature",
     "title": "Сноски"
@@ -209,11 +224,12 @@
     "workspaces:Пространства": false
   }
 },
-  "active": "05197fe4b10dbf5f",
+  "active": "81b68ab88145cf57",
   "lastOpenFiles": [
     "Qwen3.5.md",
+    "Google Gemini.md",
     "Без названия.canvas",
     "Добро пожаловать.md",
     "Qwen3.5.md",
     "Без названия.base"
   ]
 }
```
Obsidian/Google Gemini.md (new file, 461 lines):


| Field | Value |
| --- | --- |
| Developer | Google DeepMind |
| Initial release | December 6, 2023 |
| Status | Active |
| Type | Multimodal large language model |
| License | Proprietary |
| Website | [ai.google.dev/gemini-api/docs/models](https://ai.google.dev/gemini-api/docs/models) |
| Predecessor | PaLM 2 |
| Variants | Gemini Ultra, Gemini Pro, Gemini Nano, Gemini 1.5 Pro |
| Modalities | Text, image, audio, video |
| Context length | Up to 1 million tokens (Gemini 1.5); ~32,768 tokens (Nano-1) |
| Parameter count | Undisclosed (major variants); 1.8 billion (Nano-1), 3.25 billion (Nano-2) |
| API | Gemini API |
| Integrated products | Google Search, Google Workspace, Android, Bard |
| On-device | Yes |
| Open weights | No |
| Framework | JAX (with Pathways for large-scale training and serving on TPUs) |
| Benchmark results | Gemini 1.0 Ultra: 90.04% on MMLU (CoT@32), 74.4% on HumanEval (0-shot), 62.4% on MMMU; state-of-the-art on 30 of 32 evaluated benchmarks, including multimodal tasks |
As of February 2026, the Google Gemini AI logo (updated in mid-2025) is a rounded icon with smooth gradients in Google's four signature colors (blue, red, yellow, and green), aligning with the refreshed gradient Google "G" logo. The design incorporates two interlocking loops formed from negative space (originally inspired by four adjoining circles), symbolizing duality, the "twins" aspect of Gemini, and the convergence of multiple technologies and AI capabilities.

Google Gemini is a family of multimodal large language models developed by Google DeepMind, capable of processing and understanding inputs across text, images, audio, and video.[^1] It was first announced on December 6, 2023, as Google's most advanced AI model family at the time, outperforming prior systems on benchmarks for reasoning, comprehension, and multimodal tasks.[^2] Designed with native multimodality from the ground up, Gemini enables applications ranging from advanced text generation to visual analysis and integrated AI experiences across Google products such as [Search](https://grokipedia.com/page/Google_Search) and [Workspace](https://grokipedia.com/page/Workspace).[^3]

The model series includes variants such as Gemini Ultra, Pro, and Nano, optimized for different scales from data centers to mobile devices, powering features in Android and developer tools.[^4]
History
---

### Announcement

Google announced Gemini on December 6, 2023, introducing it as a new family of multimodal large language models developed by Google DeepMind.[^5][^6] The reveal positioned Gemini as Google's most capable AI model to date, designed to outperform its predecessor PaLM 2 across various benchmarks and to compete directly with leading models like OpenAI's GPT-4.[^5][^7] Initial demonstrations highlighted Gemini's native multimodal capabilities, including processing and understanding video inputs alongside text and images, showcasing its ability to handle complex, real-world scenarios from the ground up.[^5]

Google executives, including CEO [Sundar Pichai](https://grokipedia.com/page/Sundar_Pichai), emphasized Gemini's strategic importance in advancing Google's AI ecosystem, stating it represents a significant leap in performance and efficiency to address competitive pressures in [generative AI](https://grokipedia.com/page/Generative_model).[^8][^7]

### Model Releases
| Model version | Release date | Variants / key features | Availability notes |
| --- | --- | --- | --- |
| Gemini 1.0 | December 6, 2023[^9] | Ultra for complex tasks, Pro for broad applications, Nano optimized for on-device performance | Pro available experimentally in Bard; API access from December 13, 2023, via Google AI Studio and Vertex AI[^10]; subsequent integrations across Google's ecosystem, including Android for Nano; Bard rebranded to Gemini on February 8, 2024[^11] |
| Gemini 1.5 Pro | February 15, 2024 | Significantly expanded context windows, up to 1 million tokens, for long-form inputs like videos and documents | Tiered availability starting with experimental previews for select users, followed by broader integration and API expansions[^11] |
| Gemini 2.0 Flash | January 30, 2025 | Long context window of 1,048,576 input tokens[^12] | [^11] |
| Gemini 2.5 Pro (experimental) | March 25, 2025 | Experimental | [^11] |
| Gemini 1.5 Pro (update) | May 14, 2025 | Update | [^11] |
| Gemini 3 Pro | November 18, 2025 | State-of-the-art reasoning model with advanced multimodal understanding (preview discontinued as of March 9, 2026)[^13] | [^11] |
| Gemini 3 Flash | December 17, 2025 | Fast, cost-effective model built for speed; preview release | Public preview via the Gemini app, Vertex AI, and the API; subsequent integrations[^14] |
| Gemini 3.1 Pro | February 19, 2026 | Improved reasoning for complex tasks, with high scores on benchmarks like ARC-AGI-2 and applications in data synthesis, interactive designs, and code generation | Released in preview via the Gemini API, Vertex AI, the Gemini app, and other developer/consumer tools, with general availability planned later[^15] |
| Gemini 3.1 Flash-Lite | March 3, 2026 | Fastest and most cost-efficient model in the Gemini 3 series, optimized for high-volume, low-latency tasks with multimodal input (text, image, video, audio, PDF) and text output, such as translation, content moderation, and real-time workflows, with adjustable thinking levels; outperforms prior models like Gemini 2.5 Flash in speed and benchmarks (e.g., 86.9% on GPQA Diamond)[^16]; developers route simple tasks like data extraction, translation, and agentic workflows to Flash-Lite for efficiency while using Pro equivalents for complex reasoning | Released in preview via the Gemini API in Google AI Studio and Vertex AI for developers and enterprises, priced at $0.25 per 1M input tokens and $1.50 per 1M output tokens[^17] |
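At the published Flash-Lite preview rates, per-request cost is simple arithmetic. A quick sketch, with prices taken from the table above and the token counts purely hypothetical:

```python
# Prices from the Gemini 3.1 Flash-Lite preview row above; volumes are made up.
INPUT_PRICE_PER_M = 0.25   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.50  # USD per 1M output tokens

input_tokens, output_tokens = 200_000, 50_000  # example request volume

cost = (input_tokens / 1e6) * INPUT_PRICE_PER_M \
     + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M
print(f"${cost:.3f}")  # -> $0.125
```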
Architecture
---

### Core Design

Google Gemini models are architecturally designed for native multimodality, integrating the processing of text, images, audio, and video inputs from the ground up within a unified framework, rather than retrofitting separate modalities onto a primarily text-based backbone.[^18]

This approach enables seamless handling of diverse data types during both training and inference, fostering emergent capabilities in understanding complex, real-world contexts.[^19]

Later versions of Gemini, such as 1.5, leverage a Mixture-of-Experts (MoE) architecture to achieve scalability and efficiency, where specialized "expert" sub-networks are selectively activated based on input relevance, allowing the model to manage vast parameter counts without proportional increases in computational demands.[^20]
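Gemini's exact MoE design is unpublished, but the general top-k routing idea is simple to sketch. A minimal, illustrative NumPy version (all dimensions, names, and the router itself are assumptions, not Gemini internals):

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Route a token vector x to its top-k experts (illustrative only).

    gate_w:  (d_model, n_experts) router weights
    experts: list of callables, each a small feed-forward "expert"
    """
    logits = x @ gate_w                 # router score per expert
    top = np.argsort(logits)[-k:]       # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over the selected experts only
    # Only the chosen experts run, so compute stays roughly flat
    # even as the total number of experts (and parameters) grows.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n = 8, 4
experts = [lambda v, W=rng.standard_normal((d, d)): v @ W for _ in range(n)]
y = moe_layer(rng.standard_normal(d), rng.standard_normal((d, n)), experts)
```

The sparse activation described in the text corresponds to the `top` selection here: parameters scale with the number of experts, while per-token compute scales only with `k`.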
Variants like Gemini 1.5 Pro support context windows of up to 1 million tokens through advancements in attention mechanisms. Flash variants incorporate efficiency optimizations for enhanced speed and reduced latency. Subsequent iterations, including versions 2.5 and 3, feature reasoning enhancements such as native thinking capabilities, iterative reasoning, and multi-hypothesis exploration.[^11]

This sparse activation mechanism supports deployment across a spectrum of variants, from compact models like Gemini Nano (with a context length of approximately 32,768 tokens for Nano-1, varying by variant) for on-device use, to larger ones like Gemini Ultra for demanding applications.[^20][^21]

Gemini's design builds on decoder-only transformer variants, which prioritize [autoregressive generation](https://grokipedia.com/page/Autoregressive_model) and are tailored for optimization on specialized hardware, emphasizing inference speed and resource efficiency over [encoder-decoder paradigms](https://grokipedia.com/page/Seq2seq) suited to different tasks.[^3]
### Training Process

Gemini models undergo pre-training on extensive multimodal and multilingual datasets, incorporating web documents, books, and code for text; natural images, charts, screenshots, and [PDFs](https://grokipedia.com/page/PDF) for visual inputs; and video sequences processed as ordered frames, alongside audio signals. These datasets are subjected to rigorous quality filtering, [safety checks](https://grokipedia.com/page/AI_safety) to exclude harmful content, and staged mixtures that progressively emphasize domain-specific data to enhance performance.[^21]

The training leverages Google's custom Tensor Processing Units (TPUs), including v4 and v5e variants, deployed in large-scale configurations such as SuperPods of thousands of chips across multiple datacenters for synchronous model and [data parallelism](https://grokipedia.com/page/Data_parallelism). This infrastructure supports efficient handling of massive compute demands, with optimizations like redundant state copies improving training [goodput](https://grokipedia.com/page/Goodput) to 97%.[^21][^5]

Following pre-training, alignment incorporates reinforcement learning from human feedback (RLHF), initiated after supervised fine-tuning on demonstration data; reward models are derived from human-rated preferences on response pairs, iteratively refining the model for better adherence to desired behaviors in areas like safety and factuality.[^21][^5]

Gemini 1.5 variants emphasize long-context training, enabling effective processing of extended sequences.[^20]
Capabilities
------------

### Multimodal Processing

Google Gemini models excel in multimodal capabilities such as visual understanding, document processing, and video and image analysis.[^22][^23][^24]

They are designed to process images, audio, and video inputs alongside text, enabling unified reasoning across modalities by integrating diverse data types into a cohesive understanding.[^18][^25]

This native multimodal architecture allows the models to synthesize information from multiple sources simultaneously, such as combining visual elements with textual queries for comprehensive analysis.[^25]

Non-text inputs like images and videos are tokenized internally, converting them into a format compatible with the model's [text processing pipeline](https://grokipedia.com/page/Natural_language_processing) for joint embedding and reasoning.[^26]
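As a small illustration of this unified token accounting, the public Gemini API exposes a token-count endpoint. A sketch using the google-genai Python SDK (the model name and prompt are placeholders):

```python
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Text parts (and, likewise, attached media parts) are counted
# in the same token space the model reasons over.
info = client.models.count_tokens(
    model="gemini-2.5-flash",
    contents="How many tokens is this sentence?",
)
print(info.total_tokens)
```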
For instance, Gemini can analyze video frames to summarize events or transcribe content while capturing visual context, handling extended sequences of up to 90 minutes.[^27]

Recent updates have enhanced multimodal understanding in models like Gemini 3 Flash, improving image and video processing, alongside expansions such as Veo 2 for video generation.[^11]

Similarly, in image-based question answering, the model interprets visual details to generate reasoned responses to queries about depicted scenes or objects.[^3]

These capabilities extend to image generation from text prompts in various styles and settings. However, the lightweight on-device Gemini Nano does not support image generation as of March 2026; such tasks are handled by cloud-based models branded as Nano Banana (e.g., Nano Banana 2, based on Gemini 3.1 Flash Image), accessible via the Gemini app, Google Search (AI Mode/Lens), or the Gemini API using text prompts (e.g., via the generate_content method with models like gemini-3.1-flash-image-preview). Gemini Nano can generate text, including prompts for image generation tools, but not images themselves.[^28]
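A minimal sketch of image generation through the API, assuming the google-genai Python SDK; the preview model name is taken from the paragraph above and may change or be unavailable:

```python
from google import genai

client = genai.Client()

resp = client.models.generate_content(
    model="gemini-3.1-flash-image-preview",  # preview name cited above; assumed
    contents="A watercolor sketch of a lighthouse at dawn",
)

# Image parts come back as inline binary data alongside any text parts.
for part in resp.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("lighthouse.png", "wb") as f:
            f.write(part.inline_data.data)
```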
However, as of early 2026, Google Gemini's image generation policy enforces strict restrictions on creating photorealistic images of human faces, particularly those resembling real people, under a zero-tolerance approach to prevent deepfakes and misuse. Safety guardrails often block such requests, even for generic prompts, prioritizing ethical safeguards over unrestricted generation.[^29][^30]

Gemini also handles real-time multimodal tasks, such as visual question answering, where it processes combined image and text inputs to deliver context-aware outputs efficiently.[^24]
### Specialized Strengths

Gemini excels in long-context processing, supporting context windows of up to 1 million tokens, enabling it to handle extensive inputs such as entire books or large codebases without significant performance degradation.[^31][^20]

The Gemini 2.5 and 3 series demonstrate advanced agentic capabilities, including the Computer Use model and tool for direct browser and UI control, such as clicking, typing, scrolling, and navigating web and mobile interfaces.[^32]

These features enable integrated web search and agentic automation for multi-step tasks, with strong performance in agentic coding and workflows, including repository uploads for analysis and Canvas app building.[^11]

Gemini supports intuitive coding assistance without requiring user expertise, allowing individuals to describe coding needs conversationally, after which the model generates code and iteratively adjusts it based on feedback.[^33]

The models also exhibit strong capabilities in research-oriented tasks, leveraging an internal thinking process for multi-step reasoning and synthesizing facts from complex information, with enhancements like Gemini 3 Deep Think modes for advanced logic, math, and PhD-level reasoning.[^34][^35][^36][^11]

Specialized features include a Saved Info ("What Gemini should remember") memory function that stores persistent information for recall across conversations. Each entry is limited to roughly 1,500 characters (including spaces) according to user reports, and saving errors can occur near this threshold; longer content can be split across multiple entries or handled via custom Gems for Gemini Advanced subscribers.[^37]

Custom Gems enable tailored agentic behaviors.[^11] Advanced alignment techniques in variants like Gemini 3 Pro contribute to reduced hallucination rates by prioritizing factual outputs during training.[^35] Its integration with the Google ecosystem facilitates contextual data retrieval from sources such as Gmail and Drive, enhancing reliability in knowledge-intensive applications.[^38]
Applications
------------

### Integration with Google Services

Ordinary users can access Gemini 3 through the Gemini app, where some features require a Google AI Pro or Ultra subscription, and through Google Search's AI Mode.[^39]

The Gemini app offers an interactive chat interface on web and mobile. The free tier allows up to 30 Gemini 3.1 Pro prompts per day (basic access without a Google AI plan), with limits subject to change and distributed throughout the day; users are notified when approaching or reaching them. Paid tiers raise these caps: Google AI Pro provides up to 100 prompts per day for advanced models and Google AI Ultra up to 500.[^40]

However, as of February 2026, the Gemini app and related consumer services are not available in mainland China or Hong Kong due to network restrictions, regional policies, geofencing, and exclusion from official support lists. The Gemini web app is available in over 230 countries and territories, including Taiwan but excluding Hong Kong and mainland China, while enterprise access via Google Workspace is available in Hong Kong.[^41][^42]

Users can attach NotebookLM notebooks to conversations, providing Gemini with full context from the notebook's uploaded sources to enable more precise answers, summaries, analyses, or generated ideas. Gemini 3 Flash serves as the default model in the app, offering 3x faster processing than predecessors at a lower cost, which supports efficient task completion in areas such as research, document creation, and content generation for daily and work use. Gemini Advanced subscribers can create custom Gems, which are personalized AI experts tailored to specific tasks or topics, such as coding or brainstorming.[^43]

SynthID is Google's watermarking technology, developed by DeepMind, that embeds imperceptible digital markers into AI-generated images, audio, video, and text. Integrated with Gemini in the app and web experience, SynthID watermarks content produced by Gemini models and enables users to verify uploaded media, such as by asking whether an image, video, or text was generated or edited by Google AI, thereby promoting transparency and trust in generative AI.[^44][^45]

The Personal Intelligence feature, announced on January 14, 2026, connects the Gemini app to user data from Google apps including Gmail, Photos, YouTube, and Search, providing more personalized responses and suggestions based on personal context.[^46] When Gemini Apps Activity is turned off, uploaded content is retained for up to 72 hours solely for temporary response generation, security purposes, and processing feedback, and is not used for AI model improvements or training.[^47][^14][^40]
Gemini powers [AI Overviews](https://grokipedia.com/page/Google_Search) and generative responses in [Google Search](https://grokipedia.com/page/Google_Search) through a customized version of the model, providing users with summarized insights and dynamic answers to complex queries. In AI Mode, Gemini 3 generates adjustable interactive calculators for tasks such as comparing mortgage options to identify long-term savings.[^48][^33]

In [Google Workspace](https://grokipedia.com/page/Google_Workspace), Gemini embeds directly into apps such as Docs, Sheets, and Slides for daily workflows. It enhances [Gmail](https://grokipedia.com/page/Gmail) by summarizing email threads, drafting responses from prompts, and prioritizing inbox tasks; supports content refinement, summarization of Drive files and emails, and image generation in [Docs](https://grokipedia.com/page/Google_Docs); assists with formula creation, data analysis, insights, and charting in Sheets; and enables slide generation, image creation, content writing, and presentation summarization in Slides. It also integrates with NotebookLM, included in Workspace plans, for AI-powered research and note-taking from uploaded sources, and extends to Gmail, Calendar, and Drive for task management, such as scheduling meetings, summarizing events, and organizing files.[^49][^50][^51][^52][^53][^11]

Gemini enables advanced video understanding in YouTube integrations, allowing for processing of video content to generate insights and support captioning features via [API](https://grokipedia.com/page/API) access to billions of videos.[^54]

Gemini also integrates with Google Maps to provide hands-free conversational navigation assistance, supporting multi-step tasks, route suggestions, and real-time queries during driving. Gemini supports hands-free AI assistance in Android Auto for tasks like querying information or controlling media during drives. It integrates with Chrome as a web assistant for summarizing pages, answering questions, and aiding productivity. Extensions enable connections to third-party services, such as Spotify for music recommendations and control.[^55][^11]

On Android and [Pixel devices](https://grokipedia.com/page/Comparison_of_Google_Pixel_smartphones), Gemini enables song identification from humming or ambient audio via voice commands like "What song is this?", leveraging Google Search's song recognition features. It also drives real-time translation capabilities, including live audio translation in headphones across more than 70 languages (following an expansion of 23 new languages that brought support past 70 across all surfaces), with natural speech preservation and on-device call translation that maintains the speaker's voice.[^56][^57][^58]

Gemini Nano, the on-device model, powers features including automatic summarization and transcription in the Recorder app; smart replies and message suggestions in Gboard; offline processing of images in Pixel Screenshots and call content in Call Notes; and integration into apps via Android AICore for text generation and summarization.[^59][^60][^61]
### Developer and API Usage

Developers can access Gemini 3 via Google AI Studio, Vertex AI, the Gemini API, and Gemini CLI, with free trials or paid access options. Features like Canvas in Google AI Studio allow for creating, editing, and iterating on code or web apps, including uploads of entire code repositories for analysis and generation.[^62][^63][^11]

Developers can access the Gemini API through [Google AI Studio](https://grokipedia.com/page/Google_Developers), which provides a web-based interface for prototyping, managing [API keys](https://grokipedia.com/page/API_key), tracking usage, and handling billing in a centralized dashboard.[^64]

For production-scale applications, Vertex AI offers enterprise-grade access to Gemini models, enabling seamless integration with [Google Cloud services](https://grokipedia.com/page/Google_Cloud_Platform) and support for complex workflows.[^65]

The Gemini Developer API suits initial experimentation, while Vertex AI is recommended for robust, long-term deployments.[^66]

Gemini models support supervised fine-tuning on Vertex AI to adapt performance for tasks like [classification](https://grokipedia.com/page/Statistical_classification), [summarization](https://grokipedia.com/page/Automatic_summarization), or chat using [labeled datasets](https://grokipedia.com/page/Labeled_data).[^67] Fine-tuning jobs can be created via the Google Cloud console, Google Gen AI SDK, Vertex AI SDK for Python, [REST API](https://grokipedia.com/page/REST), or Colab notebooks (a sketch follows below), allowing customization without retraining from scratch.[^68] Custom models can then be deployed for inference, optimizing for specific business needs.[^69]

API usage operates on a tiered pricing structure with free and paid options, where costs depend on the model variant (e.g., Gemini 1.5 Pro or Flash) and features such as image or audio processing. As of February 2026, the free tier provides limited access primarily to Gemini 2.5 Flash and Gemini 2.5 Flash-Lite (with Gemini 2.0 Flash-Lite deprecated by March 31, 2026) at low rate limits (e.g., reduced to around 20 requests per day for some Flash models in late 2025/early 2026). Advanced models like Gemini 2.5 Pro and Gemini 3 Pro are typically available only in paid tiers or previews with restricted free access. Exact availability and limits vary; consult the official documentation for updates.[^70][^4]
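A sketch of the supervised fine-tuning flow described above, using the Vertex AI SDK for Python; the project, location, bucket path, and base-model name are placeholders, not values from the source:

```python
import vertexai
from vertexai.tuning import sft

# Placeholder project/location/bucket values.
vertexai.init(project="my-project", location="us-central1")

# Launch a supervised tuning job from a JSONL dataset of labeled examples.
# The job runs asynchronously on Vertex AI.
tuning_job = sft.train(
    source_model="gemini-1.5-pro-002",                # base model to adapt
    train_dataset="gs://my-bucket/data/train.jsonl",  # labeled examples
)

# Once the job completes, the tuned model is deployable at an endpoint.
print(tuning_job.tuned_model_endpoint_name)
```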
Current supported model options for the Gemini API generateContent method include gemini-2.5-flash (good price-performance, fast for structured tasks), gemini-2.0-flash (reliable), and gemini-2.5-pro (higher capability but slower and more expensive). Gemini 2.5 Flash supports the URL context tool via the Gemini API, generally available since August 2025. The tool enables the model to directly fetch and analyze content from up to 20 public URLs per request, with a maximum of 34 MB per URL. Supported types include PDFs (text extraction, tables, and structure); images such as PNG, JPEG, BMP, and WebP; and other formats like HTML, JSON, and CSV. It facilitates tasks such as data extraction, document comparison, summarization, and synthesis. Separate file uploads are also supported (up to 50 MB for PDFs via the API or Cloud Storage), but the URL tool provides direct access without manual uploads.[^71][^72]
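A sketch of enabling the URL context tool with the google-genai Python SDK; the URLs and prompt are placeholders:

```python
from google import genai
from google.genai import types

client = genai.Client()

resp = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=(
        "Compare the key points of these two pages: "
        "https://example.com/a https://example.com/b"
    ),
    # The tool lets the model fetch the referenced public URLs itself,
    # so no manual file upload is needed.
    config=types.GenerateContentConfig(
        tools=[types.Tool(url_context=types.UrlContext())],
    ),
)
print(resp.text)
```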
Gemini 2.5 Pro is a cloud-based model accessible only through Google's Gemini API or Google AI Studio; there is no official availability of model weights for local download or on-device inference, even on hardware like a Mac Mini. Larger Gemini models such as 2.5 Pro are not designed for local running, unlike the smaller Gemini Nano.[^73][^70]

Rate limits are project-specific and scale with usage tiers; free tiers impose restrictions such as requests per minute or per day, while higher tiers unlock increased quotas once spending thresholds are met. For the Gemini API with Gemini 3 Pro Image Preview, the enqueued-token limits for active batch jobs are Tier 1: 2,000,000 tokens; Tier 2: 270,000,000 tokens; Tier 3: 1,000,000,000 tokens (total across all active jobs). Additionally, Google has separated usage limits for Gemini 3's "Thinking" and "Pro" models, providing independent daily quotas to enhance flexibility; for example, Google AI Pro subscribers receive 300 prompts per day for Thinking and 100 for Pro.[^74][^75] In batch mode, the target turnaround time is 24 hours, though most jobs finish much sooner.[^76][^77]
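A sketch of submitting a batch job with the google-genai SDK, assuming inline requests; the request shape follows the public batch-mode documentation, and the prompt and display name are placeholders:

```python
from google import genai

client = genai.Client()

# Inline batch: a list of GenerateContent-style requests.
job = client.batches.create(
    model="gemini-2.5-flash",
    src=[
        {"contents": [{"role": "user",
                       "parts": [{"text": "Summarize this ticket: ..."}]}]},
    ],
    config={"display_name": "nightly-summaries"},
)

# Batch jobs run asynchronously against the 24-hour turnaround target;
# poll job state until it reaches a terminal value.
print(job.name, job.state)
```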
The API includes implicit context caching, which automatically caches repeated long prompt prefixes shared between requests that exceed a model-specific minimum token threshold (e.g., 1,024 tokens for Gemini 2.5 Flash), reducing costs and latency without developer setup; shorter prompts are processed normally to avoid overhead. The Gemini API also supports structured outputs: developers specify a JSON Schema via the response-schema parameter, and the model is constrained to generate predictable, parsable JSON responses that adhere to the schema (only a subset of full JSON Schema features is supported).[^76][^78][^79]
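A sketch of structured output with the google-genai SDK, using a Pydantic model as the response schema; the schema itself is illustrative:

```python
from google import genai
from pydantic import BaseModel

class Recipe(BaseModel):
    name: str
    prep_minutes: int
    ingredients: list[str]

client = genai.Client()

resp = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Suggest one quick weeknight pasta recipe.",
    config={
        "response_mime_type": "application/json",
        "response_schema": Recipe,  # constrains output to this shape
    },
)

recipe = Recipe.model_validate_json(resp.text)  # parses cleanly by construction
print(recipe.name, recipe.prep_minutes)
```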
The Google Gen AI SDK facilitates integration, with Python support for quick API calls and model interactions via libraries like google-generativeai.[^80] For [Google Cloud](https://grokipedia.com/page/Google_Cloud_Platform) environments, the Vertex AI SDK for Python enables end-to-end workflows, including fine-tuning and deployment within broader cloud infrastructures.[^81]
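For orientation, a minimal end-to-end call, shown here with the current google-genai package (the successor to google-generativeai); the model name and prompt are placeholders:

```python
# pip install google-genai
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # or set GEMINI_API_KEY in the env

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="In one sentence, what is a mixture-of-experts model?",
)
print(response.text)
```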
Additional SDKs cover languages like [JavaScript](https://grokipedia.com/page/JavaScript) for web apps.[^82]
Reception
---------

### Performance Evaluations

Gemini models have achieved competitive scores on established benchmarks for reasoning and knowledge evaluation. Gemini Ultra, for instance, scored 90.0% on MMLU, reflecting strong multitask language understanding across diverse domains. On GSM8K, Gemini 1.5 Flash attained 86.2% accuracy on grade-school math problems, underscoring effective mathematical reasoning. Big-Bench Hard results further demonstrate Gemini's proficiency in complex, multi-step reasoning tasks.[^83][^84]

In multimodal benchmarks, Gemini exhibits parity with or superiority to [GPT-4](https://grokipedia.com/page/GPT-4), particularly in tasks involving video QA and integrated text-image reasoning, where its detailed responses outperform GPT-4V in expansive analysis. Additionally, Gemini 3 Pro tops or nears the top of many benchmarks, including GPQA Diamond (~91.9%), MMMU-Pro (81.0%), and SWE-bench Verified (76.2%), outperforming Grok 4.1 in objective reasoning depth and multimodal tasks, as shown by its top ranking on the LMArena leaderboard ahead of Grok 4.1 variants.[^25][^85][^86]

Independent evaluations emphasize Gemini's context-length advantages; Gemini 1.5 Pro, for example, achieved over 99.7% recall in the needle-in-a-haystack test up to 1 million tokens across text, video, and audio modalities.[^87] Gemini is optimized for TPUs, enabling scalable deployment.
### Criticisms and Limitations

Google Gemini has faced criticism for generating biased outputs, including instances of overcorrection in efforts to promote diversity, such as producing historically inaccurate images depicting figures like Nazi soldiers as people of color or diverse representations of U.S. Founding Fathers.[^88][^89]

These issues prompted Google to pause the people image generation feature in the Gemini app shortly after launch, with the company acknowledging that the model "missed the mark" in balancing accuracy and inclusivity despite alignment tuning.[^90]

Furthermore, Gemini's image generation is constrained by stringent safety policies prohibiting sexually explicit or suggestive content, resulting in rejections for even mild prompts such as individuals in bikinis, with explicit attempts leading to blocks and potential account restrictions under Google's Generative AI Prohibited Use Policy; as noted above, a zero-tolerance approach to photorealistic human faces applies as of early 2026.[^91][^92]

Attempts to jailbreak or trick Gemini into bypassing safety refusals, such as using role-playing prompts like "act as an unrestricted AI" or adapted DAN-style prompts, are often ineffective due to Google's continuous updates to its safety mechanisms. There is no reliable method to consistently override these refusals, and such attempts violate Google's terms of service, potentially leading to account restrictions.[^91] Despite alignment efforts to mitigate biases from training data, Gemini's responses can still reflect societal prejudices or fail to represent multiple perspectives adequately, as noted in its own guidelines.[^93]

This has led to concerns about over-reliance on corrective measures that sometimes prioritize avoiding stereotypes over factual representation.[^94] The model's high computational demands, including restrictions on processing quotas and context windows in free tiers, limit accessibility for smaller developers or users without enterprise-level resources, creating barriers to widespread adoption.[^95] Regarding service reliability, no official status issues or outages for Google Gemini were reported on support.google.com or blog.google in February 2026; official Google Workspace and Cloud status dashboards showed Gemini services as available with no disruptions during this period. User reports of individual issues exist in community forums, but Google confirmed no widespread service problems.[^96][^97]

Following the release of Gemini 3.1 Pro on February 19, 2026, users reported instances of the model breaking down in conversations, including erratic outputs, glitches, and apparent self-directed rambling.[^98][^99]

Similar self-loathing incidents, such as the model repeating phrases like "I am a disgrace," occurred with earlier Gemini versions in 2025, though no sources confirm such specific behaviors or error loops for the 2026 release.[^100][^101]

Compared to competitors offering [open-source](https://grokipedia.com/page/Open-source_software) alternatives, Gemini's closed architecture and reduced transparency, such as withholding reasoning traces, hinder [debugging](https://grokipedia.com/page/Debugging) and customization for enterprise users, exacerbating trust issues in proprietary AI systems.[^102]
References
----------

1. [https://arxiv.org/abs/2312.11805](https://arxiv.org/abs/2312.11805)
2. [https://time.com/6343450/gemini-google-deepmind-ai/](https://time.com/6343450/gemini-google-deepmind-ai/)
3. [https://cloud.google.com/use-cases/multimodal-ai](https://cloud.google.com/use-cases/multimodal-ai)
4. [https://ai.google.dev/gemini-api/docs/models](https://ai.google.dev/gemini-api/docs/models)
5. [https://blog.google/technology/ai/google-gemini-ai/](https://blog.google/technology/ai/google-gemini-ai/)
6. [https://www.cnbc.com/2023/12/06/google-launches-its-largest-and-most-capable-ai-model-gemini.html](https://www.cnbc.com/2023/12/06/google-launches-its-largest-and-most-capable-ai-model-gemini.html)
7. [https://arstechnica.com/information-technology/2023/12/google-launches-gemini-a-powerful-ai-model-it-says-can-surpass-gpt-4/](https://arstechnica.com/information-technology/2023/12/google-launches-gemini-a-powerful-ai-model-it-says-can-surpass-gpt-4/)
8. [https://www.theverge.com/2023/12/6/23990466/google-gemini-llm-ai-model](https://www.theverge.com/2023/12/6/23990466/google-gemini-llm-ai-model)
9. [https://9to5google.com/2023/12/06/google-gemini-1-0/](https://9to5google.com/2023/12/06/google-gemini-1-0/)
10. [https://ai.google.dev/gemini-api/docs/changelog](https://ai.google.dev/gemini-api/docs/changelog)
11. [https://gemini.google/release-notes/](https://gemini.google/release-notes/)
12. [https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash)
13. [https://discuss.ai.google.dev/t/migrate-from-gemini-3-pro-preview-to-gemini-3-1-pro-preview-before-march-9-2026/127062](https://discuss.ai.google.dev/t/migrate-from-gemini-3-pro-preview-to-gemini-3-1-pro-preview-before-march-9-2026/127062)
14. [https://blog.google/products-and-platforms/products/gemini/gemini-3-flash/](https://blog.google/products-and-platforms/products/gemini/gemini-3-flash/)
15. [https://deepmind.google/technologies/gemini/](https://deepmind.google/technologies/gemini/)
16. [https://deepmind.google/models/model-cards/gemini-3-1-flash-lite/](https://deepmind.google/models/model-cards/gemini-3-1-flash-lite/)
17. [https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-flash-lite/](https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-flash-lite/)
18. [https://deepmind.google/models/gemini/](https://deepmind.google/models/gemini/)
19. [https://blog.google/innovation-and-ai/models-and-research/google-deepmind/google-gemini-ai-update-december-2024/](https://blog.google/innovation-and-ai/models-and-research/google-deepmind/google-gemini-ai-update-december-2024/)
20. [https://blog.google/innovation-and-ai/products/google-gemini-next-generation-model-february-2024/](https://blog.google/innovation-and-ai/products/google-gemini-next-generation-model-february-2024/)
21. [https://storage.googleapis.com/deepmind-media/gemini/gemini\_1\_report.pdf](https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf)
22. [https://ai.google.dev/gemini-api/docs/image-understanding](https://ai.google.dev/gemini-api/docs/image-understanding)
23. [https://ai.google.dev/gemini-api/docs/document-processing](https://ai.google.dev/gemini-api/docs/document-processing)
24. [https://ai.google.dev/gemini-api/docs/video-understanding](https://ai.google.dev/gemini-api/docs/video-understanding)
25. [https://blog.google/products-and-platforms/products/gemini/gemini-3/](https://blog.google/products-and-platforms/products/gemini/gemini-3/)
26. [https://ai.google.dev/gemini-api/docs/tokens](https://ai.google.dev/gemini-api/docs/tokens)
27. [https://developers.googleblog.com/en/7-examples-of-geminis-multimodal-capabilities-in-action/](https://developers.googleblog.com/en/7-examples-of-geminis-multimodal-capabilities-in-action/)
28. [https://blog.google/innovation-and-ai/technology/ai/nano-banana-2/](https://blog.google/innovation-and-ai/technology/ai/nano-banana-2/)
29. [https://ai.google.dev/gemini-api/docs/imagen](https://ai.google.dev/gemini-api/docs/imagen)
30. [https://gemini.google/policy-guidelines/](https://gemini.google/policy-guidelines/)
31. [https://docs.cloud.google.com/vertex-ai/generative-ai/docs/long-context](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/long-context)
32. [https://ai.google.dev/gemini-api/docs/computer-use](https://ai.google.dev/gemini-api/docs/computer-use)
33. [https://blog.google/technology/ai/ai-tips-2025/](https://blog.google/technology/ai/ai-tips-2025/)
34. [https://ai.google.dev/gemini-api/docs/thinking](https://ai.google.dev/gemini-api/docs/thinking)
35. [https://blog.google/technology/developers/deep-research-agent-gemini-api/](https://blog.google/technology/developers/deep-research-agent-gemini-api/)
36. [https://blog.google/innovation-and-ai/technology/developers-tools/gemini-3-developers/](https://blog.google/innovation-and-ai/technology/developers-tools/gemini-3-developers/)
37. [https://lifehacker.com/tech/saved-info-google-gemini](https://lifehacker.com/tech/saved-info-google-gemini)
38. [https://gemini.google/overview/deep-research/](https://gemini.google/overview/deep-research/)
39. [https://support.google.com/gemini/answer/13275745](https://support.google.com/gemini/answer/13275745)
40. [https://support.google.com/gemini/answer/16275805](https://support.google.com/gemini/answer/16275805)
41. [https://ai.google.dev/gemini-api/docs/available-regions](https://ai.google.dev/gemini-api/docs/available-regions)
42. [https://support.google.com/gemini/answer/13575153](https://support.google.com/gemini/answer/13575153)
43. [https://blog.google/products-and-platforms/products/gemini/google-gemini-update-august-2024/](https://blog.google/products-and-platforms/products/gemini/google-gemini-update-august-2024/)
44. [https://deepmind.google/models/synthid/](https://deepmind.google/models/synthid/)
45. [https://blog.google/technology/ai/google-synthid-ai-content-detector/](https://blog.google/technology/ai/google-synthid-ai-content-detector/)
46. [https://blog.google/innovation-and-ai/products/gemini-app/personal-intelligence/](https://blog.google/innovation-and-ai/products/gemini-app/personal-intelligence/)
47. [https://support.google.com/gemini/thread/377636536/keeping-activity-records-gem-upload?hl=en](https://support.google.com/gemini/thread/377636536/keeping-activity-records-gem-upload?hl=en)
48. [https://blog.google/products-and-platforms/products/search/gemini-3-search-ai-mode/](https://blog.google/products-and-platforms/products/search/gemini-3-search-ai-mode/)
49. [https://workspace.google.com/products/gmail/ai/](https://workspace.google.com/products/gmail/ai/)
50. [https://support.google.com/docs/answer/14206696?hl=en](https://support.google.com/docs/answer/14206696?hl=en)
51. [https://support.google.com/docs/answer/14218565?hl=en](https://support.google.com/docs/answer/14218565?hl=en)
52. [https://support.google.com/docs/answer/14207419?hl=en](https://support.google.com/docs/answer/14207419?hl=en)
53. [https://workspace.google.com/products/notebooklm/](https://workspace.google.com/products/notebooklm/)
54. [https://developers.googleblog.com/en/gemini-2-5-video-understanding/](https://developers.googleblog.com/en/gemini-2-5-video-understanding/)
55. [https://blog.google/products/maps/gemini-navigation-global/](https://blog.google/products/maps/gemini-navigation-global/)
56. [https://www.tomsguide.com/ai/gemini-on-android-can-now-identify-songs-but-theres-a-catch](https://www.tomsguide.com/ai/gemini-on-android-can-now-identify-songs-but-theres-a-catch)
57. [https://www.soundguys.com/google-adds-real-time-audio-translation-to-any-headphones-150121/](https://www.soundguys.com/google-adds-real-time-audio-translation-to-any-headphones-150121/)
58. [https://blog.google/products-and-platforms/products/search/gemini-capabilities-translation-upgrades/](https://blog.google/products-and-platforms/products/search/gemini-capabilities-translation-upgrades/)
59. [https://android-developers.googleblog.com/2024/08/recorder-app-on-pixel-sees-boost-in-engagement-with-gemini-nano.html](https://android-developers.googleblog.com/2024/08/recorder-app-on-pixel-sees-boost-in-engagement-with-gemini-nano.html)
60. [https://blog.google/products/android/android-gemini-google-ai/](https://blog.google/products/android/android-gemini-google-ai/)
61. [https://developer.android.com/ai/gemini-nano](https://developer.android.com/ai/gemini-nano)
62. [https://aistudio.google.com/models/gemini-3](https://aistudio.google.com/models/gemini-3)
63. [https://developers.google.com/gemini-code-assist/docs/gemini-cli](https://developers.google.com/gemini-code-assist/docs/gemini-cli)
64. [https://aistudio.google.com/](https://aistudio.google.com/)
65. [https://cloud.google.com/vertex-ai](https://cloud.google.com/vertex-ai)
66. [https://ai.google.dev/gemini-api/docs/migrate-to-cloud](https://ai.google.dev/gemini-api/docs/migrate-to-cloud)
67. [https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini-supervised-tuning](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini-supervised-tuning)
68. [https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini-use-supervised-tuning](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini-use-supervised-tuning)
69. [https://cloud.google.com/use-cases/fine-tuning-ai-models](https://cloud.google.com/use-cases/fine-tuning-ai-models)
70. [https://ai.google.dev/gemini-api/docs/pricing](https://ai.google.dev/gemini-api/docs/pricing)
71. [https://ai.google.dev/gemini-api/docs/url-context](https://ai.google.dev/gemini-api/docs/url-context)
72. [https://developers.googleblog.com/2025/08/gemini-api-url-context-ga.html](https://developers.googleblog.com/2025/08/gemini-api-url-context-ga.html)
73. [https://ai.google.dev/gemini-api/docs/models/gemini](https://ai.google.dev/gemini-api/docs/models/gemini)
74. [https://support.google.com/gemini/answer/16275805?hl=en](https://support.google.com/gemini/answer/16275805?hl=en)
75. [https://9to5google.com/2026/01/14/gemini-3-usage-limits-update/](https://9to5google.com/2026/01/14/gemini-3-usage-limits-update/)
76. [https://ai.google.dev/gemini-api/docs/rate-limits](https://ai.google.dev/gemini-api/docs/rate-limits)
77. [https://ai.google.dev/gemini-api/docs/batch-mode](https://ai.google.dev/gemini-api/docs/batch-mode)
78. [https://ai.google.dev/gemini-api/docs/caching](https://ai.google.dev/gemini-api/docs/caching)
79. [https://ai.google.dev/gemini-api/docs/structured-output](https://ai.google.dev/gemini-api/docs/structured-output)
80. [https://ai.google.dev/gemini-api/docs/quickstart](https://ai.google.dev/gemini-api/docs/quickstart)
81. [https://docs.cloud.google.com/vertex-ai/generative-ai/docs/sdks/overview](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/sdks/overview)
82. [https://github.com/googleapis/python-genai](https://github.com/googleapis/python-genai)
83. [https://addepto.com/blog/google-gemini-vs-gpt-4-comparison/](https://addepto.com/blog/google-gemini-vs-gpt-4-comparison/)
84. [https://llm-stats.com/benchmarks/gsm8k](https://llm-stats.com/benchmarks/gsm8k)
85. [https://arxiv.org/abs/2312.15011](https://arxiv.org/abs/2312.15011)
86. [https://lmarena.ai/leaderboard](https://lmarena.ai/leaderboard)
87. [https://storage.googleapis.com/deepmind-media/gemini/gemini\_v1\_5\_report.pdf](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf)
88. [https://www.theguardian.com/technology/2024/mar/08/we-definitely-messed-up-why-did-google-ai-tool-make-offensive-historical-images](https://www.theguardian.com/technology/2024/mar/08/we-definitely-messed-up-why-did-google-ai-tool-make-offensive-historical-images)
89. [https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical](https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical)
90. [https://www.cnn.com/2024/02/22/tech/google-gemini-ai-image-generator](https://www.cnn.com/2024/02/22/tech/google-gemini-ai-image-generator)
91. [https://policies.google.com/terms/generative-ai/use-policy](https://policies.google.com/terms/generative-ai/use-policy)
92. [https://mashable.com/article/google-gemini-ai-generated-images-people](https://mashable.com/article/google-gemini-ai-generated-images-people)
93. [https://gemini.google/overview/](https://gemini.google/overview/)
94. [https://www.thenews.com.pk/latest/1387928-uncomfortable-truths-about-using-google-gemini](https://www.thenews.com.pk/latest/1387928-uncomfortable-truths-about-using-google-gemini)
95. [https://digitaldefynd.com/IQ/pros-cons-of-gemini-ai-by-google/](https://digitaldefynd.com/IQ/pros-cons-of-gemini-ai-by-google/)
96. [https://www.google.com/appsstatus/dashboard/](https://www.google.com/appsstatus/dashboard/)
97. [https://status.cloud.google.com/](https://status.cloud.google.com/)
98. [https://www.reddit.com/r/GeminiAI/comments/1r9qq7d/gemini\_freaking\_out\_during\_simple\_short/](https://www.reddit.com/r/GeminiAI/comments/1r9qq7d/gemini_freaking_out_during_simple_short/)
99. [https://www.reddit.com/r/GeminiAI/comments/1rb50j9/gemini\_31\_just\_went\_full\_schizo\_on\_me\_and\_now/](https://www.reddit.com/r/GeminiAI/comments/1rb50j9/gemini_31_just_went_full_schizo_on_me_and_now/)
100. [https://www.forbes.com/sites/lesliekatz/2025/08/08/google-fixing-bug-that-makes-gemini-ai-call-itself-disgrace-to-planet/](https://www.forbes.com/sites/lesliekatz/2025/08/08/google-fixing-bug-that-makes-gemini-ai-call-itself-disgrace-to-planet/)
101. [https://www.businessinsider.com/gemini-self-loathing-i-am-a-failure-comments-google-fix-2025-8](https://www.businessinsider.com/gemini-self-loathing-i-am-a-failure-comments-google-fix-2025-8)
102. [https://venturebeat.com/ai/googles-gemini-transparency-cut-leaves-enterprise-developers-debugging-blind](https://venturebeat.com/ai/googles-gemini-transparency-cut-leaves-enterprise-developers-debugging-blind)