Stanford University has published the seventh edition of its AI Index report. The report, produced by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), tracks technical progress in artificial intelligence, public perception of the technology, and the geopolitical dynamics shaping its development. This year's edition contains an expanded chapter on responsible AI and new chapters on AI in science and in medicine, alongside the usual roundups of research and development, technical performance, the economy, education, policy and governance, diversity, and public opinion. IEEE Spectrum is the magazine and website of the IEEE (Institute of Electrical and Electronics Engineers), the world's largest technical professional association dedicated to advancing technology for the benefit of humanity; it provides news and analysis on trends and developments in engineering, science, and technology. Here are five charts you may not have seen.
The boom in generative AI investment
While corporate investment was down overall last year, investment in generative AI went through the roof. Nestor Maslej, editor-in-chief of this year’s report, tells Spectrum that the boom is indicative of a broader trend in 2023, as the world grappled with the new capabilities and risks of generative AI systems like ChatGPT and the image-generating DALL-E 2. “The story in the last year has been about people responding [to generative AI],” says Maslej, “whether it’s in policy, whether it’s in public opinion, or whether it’s in industry with a lot more investment.” Another chart in the report shows that most of that private investment in generative AI is happening in the United States.
Google is dominating the foundation model race
Foundation models are big multipurpose models—for example, OpenAI’s GPT-3 and GPT-4 are the foundation models that enable ChatGPT users to write code or Shakespearean sonnets. Since training these models typically requires vast resources, industry now makes most of them, with academia only putting out a few. Companies release foundation models both to push the state of the art forward and to give developers a foundation on which to build products and services. Google released the most in 2023.
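To make "a foundation on which to build" concrete, here is a minimal sketch of reusing an openly released checkpoint for a downstream task. It assumes the Hugging Face transformers library and uses GPT-2 purely as a stand-in for any open foundation model; neither is drawn from the AI Index report.

```python
# Minimal sketch: building on an open foundation model for text generation.
# Assumes the Hugging Face `transformers` library; "gpt2" is an illustrative
# stand-in for any openly released checkpoint, not a model from the report.
from transformers import pipeline

# Downloads the pretrained weights once, then reuses them for generation.
generator = pipeline("text-generation", model="gpt2")

prompt = "Write a Shakespearean sonnet about debugging:"
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])
```

The point of the sketch is simply that the expensive part (pretraining) has already been paid for by whoever released the model; a developer only supplies the prompt or fine-tuning data for their own product.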
Open source models versus proprietary models
One of the hot debates in AI right now is whether foundation models should be open or closed, with some arguing passionately that open models are dangerous and others maintaining that open models drive innovation. The AI Index doesn’t wade into that debate, but instead looks at trends such as how many open and closed models have been released (another chart, not included here, shows that of the 149 foundation models released in 2023, 98 were open, 23 gave partial access through an API, and 28 were closed).
The chart above reveals another aspect: Closed models outperform open ones on a host of commonly used benchmarks. Maslej says the debate about open versus closed “usually centers around risk concerns, but there’s less discussion about whether there are meaningful performance trade-offs.”
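For readers unfamiliar with how such benchmark comparisons are scored, here is a minimal, self-contained sketch of an exact-match evaluation harness: every model, open or closed, answers the same prompts and is graded against the same references. The ask_model callable and the toy questions are hypothetical stand-ins, not the benchmarks used in the AI Index.

```python
# Minimal sketch of benchmark-style scoring: run a model on fixed prompts and
# compute exact-match accuracy against reference answers. `ask_model` is a
# hypothetical stand-in for whatever client (local weights or a hosted API)
# exposes the model under test.
from typing import Callable, List, Tuple


def exact_match_accuracy(
    ask_model: Callable[[str], str],
    dataset: List[Tuple[str, str]],
) -> float:
    """Fraction of prompts whose answer exactly matches the reference."""
    hits = sum(
        1
        for prompt, reference in dataset
        if ask_model(prompt).strip().lower() == reference.strip().lower()
    )
    return hits / len(dataset)


if __name__ == "__main__":
    toy_dataset = [("What is 2 + 2?", "4"), ("Capital of France?", "Paris")]
    # A dummy "model" that always answers "4", just to exercise the harness.
    dummy_model = lambda prompt: "4"
    print(exact_match_accuracy(dummy_model, toy_dataset))  # prints 0.5
```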
Further reading
Artificial intelligence, AlphaFold 3, and the meaning of life: Demis Hassabis's TED interview
How to install Microsoft's Phi-3 and how it works: our hands-on test
How do Gemini's new extensions work?