Category Archives: AI

A Quirky Idea -> Trillion Dollar Nvidia

Source: MIT Technology Review, Oct 2023

Nvidia is now a trillion-dollar company.

At the time, Nvidia was desperate to find applications for its niche new hardware. “When you invent a new technology, you have to be receptive to crazy ideas,” says Nvidia CEO Jensen Huang. “My state of mind was always to be looking for something quirky, and the idea that neural networks would transform computer science—that was an outrageously quirky idea.”

AI just beat a human test for creativity.

Source: MIT Technology Review, Sep 2023


Who’s Better at Generating Innovative Ideas, ChatGPT or M.B.A. Students?

Source: MishTalk, Sep 2023
<original research article>

ChatGPT can generate ideas far faster than humans. This gives it a huge edge in coming up with a few great ideas. For this study, the professors gave ChatGPT and the students the same prompt.

Do LLMs Enhance Productivity in Generating Ideas?

The answer to this question is straightforward. ChatGPT-4 is very efficient at generating ideas. This question does not require much precision to answer. Two hundred ideas can be generated by one human interacting with ChatGPT-4 in about 15 minutes. A human working alone can generate about five ideas in 15 minutes (Girotra et al., 2010). Humans working in groups do even worse. In short, the productivity race between humans and ChatGPT is not even close.

For the focused idea generation task itself, a human using ChatGPT-4 is thus about 40 times more productive than a human working alone.

ChatGPT generated the best-rated idea in our sample, with an 11% higher purchase probability than the best human idea.

The average quality of the top decile in each of the three pools also follows the same pattern as average quality: seeded ChatGPT ≻ ChatGPT ≻ Humans.

Overall, we have 400 ideas, with an equal number generated by ChatGPT and humans. Of the top 40 ideas (the top decile), a full 35 (87.5%) were generated by ChatGPT.
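The headline figures quoted above reduce to simple arithmetic; this short sketch just reproduces the numbers as reported in the study:

```python
# Productivity: ideas generated in a 15-minute session.
ideas_with_chatgpt = 200   # one human interacting with ChatGPT-4
ideas_human_alone = 5      # one human working alone (Girotra et al., 2010)
productivity_ratio = ideas_with_chatgpt / ideas_human_alone
print(productivity_ratio)  # -> 40.0, the "about 40 times more productive" claim

# Quality: share of the top decile generated by ChatGPT.
top_decile_size = 40       # top 10% of the 400-idea pool
from_chatgpt = 35          # of those, generated by ChatGPT
print(from_chatgpt / top_decile_size)  # -> 0.875, i.e. 87.5%
```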


The Battle for Generative AI Dominance

Generative AI: “autocomplete function”

Source: ZeroHedge, Jul 2023

In a conversation with Goldman Sachs’ Jenny Grimberg, Marcus explains how generative artificial intelligence (AI) tools actually work today.

At the core of all current generative AI tools is basically an autocomplete function that has been trained on a substantial portion of the internet.

These tools possess no understanding of the world, so they’ve been known to hallucinate, or make up false statements.

The tools excel at largely predictable tasks like writing code, but not at, for example, providing accurate medical information or diagnoses, which autocomplete isn’t sophisticated enough to do.

Contrary to what some may argue, the professor explains that these tools don’t reason anything like humans.

AI machines are learning, but much of what they learn is the statistics of words, and, with reinforcement learning, how to properly respond to certain prompts.

They’re not learning abstract concepts.


Generative AI Business Models

Source: Andreessen Horowitz, Jan 2023

The stack can be divided into three layers:

  • Applications that integrate generative AI models into a user-facing product, either running their own model pipelines (“end-to-end apps”) or relying on a third-party API
  • Models that power AI products, made available either as proprietary APIs or as open-source checkpoints (which, in turn, require a hosting solution)
  • Infrastructure vendors (i.e. cloud platforms and hardware manufacturers) that run training and inference workloads for generative AI models

Looking ahead, some of the big questions facing generative AI app companies include:

  • Vertical integration (“model + app”). Consuming AI models as a service allows app developers to iterate quickly with a small team and swap model providers as technology advances. On the flip side, some devs argue that the product is the model, and that training from scratch is the only way to create defensibility — i.e. by continually re-training on proprietary product data. But it comes at the cost of much higher capital requirements and a less nimble product team.
  • Building features vs. apps. Generative AI products take a number of different forms: desktop apps, mobile apps, Figma/Photoshop plugins, Chrome extensions, even Discord bots. It’s easy to integrate AI products where users already work, since the UI is generally just a text box. Which of these will become standalone companies — and which will be absorbed by incumbents, like Microsoft or Google, already incorporating AI into their product lines?
  • Managing through the hype cycle. It’s not yet clear whether churn is inherent in the current batch of generative AI products, or if it’s an artifact of an early market. Or if the surge of interest in generative AI will fall off as the hype subsides. These questions have important implications for app companies, including when to hit the gas pedal on fundraising; how aggressively to invest in customer acquisition; which user segments to prioritize; and when to declare product-market fit.

Perhaps the biggest winner in generative AI so far: Nvidia. The company reported $3.8 billion of data center GPU revenue in the third quarter of its fiscal year 2023, including a meaningful portion for generative AI use cases. And it has built strong moats around this business via decades of investment in the GPU architecture, a robust software ecosystem, and deep usage in the academic community. One recent analysis found that Nvidia GPUs are cited in research papers 90 times more than the top AI chip startups combined.

Where will value accrue?

Of course, we don’t know yet. But based on the early data we have for generative AI, combined with our experience with earlier AI/ML companies, our intuition is the following.

There don’t appear, today, to be any systemic moats in generative AI. As a first-order approximation:

  • applications lack strong product differentiation because they use similar models;
  • models face unclear long-term differentiation because they are trained on similar datasets with similar architectures;
  • cloud providers lack deep technical differentiation because they run the same GPUs;
  • the hardware companies manufacture their chips at the same fabs.

There are, of course, the standard moats: scale moats (“I have or can raise more money than you!”), supply-chain moats (“I have the GPUs, you don’t!”), ecosystem moats (“Everyone uses my software already!”), algorithmic moats (“We’re more clever than you!”), distribution moats (“I already have a sales team and more customers than you!”) and data pipeline moats (“I’ve crawled more of the internet than you!”).

But none of these moats tend to be durable over the long term. And it’s too early to tell if strong, direct network effects are taking hold in any layer of the stack.

Based on the available data, it’s just not clear if there will be a long-term, winner-take-all dynamic in generative AI.

This is weird. But to us, it’s good news. The potential size of this market is hard to grasp — somewhere between all software and all human endeavors — so we expect many, many players and healthy competition at all levels of the stack.

We also expect both horizontal and vertical companies to succeed, with the best approach dictated by end-markets and end-users.

For example, if the primary differentiation in the end-product is the AI itself, it’s likely that verticalization (i.e. tightly coupling the user-facing app to the home-grown model) will win out.

Whereas if the AI is part of a larger, long-tail feature set, then it’s more likely horizontalization will occur. Of course, we should also see the building of more traditional moats over time — and we may even see new types of moats take hold.

Generative AI: Impact upon Employment

Source: Goldman Sachs, Apr 2023

Breakthroughs in generative artificial intelligence have the potential to bring about sweeping changes to the global economy, according to Goldman Sachs Research. As tools using advances in natural language processing work their way into businesses and society, they could drive a 7% (or almost $7 trillion) increase in global GDP and lift productivity growth by 1.5 percentage points over a 10-year period.

A recent study by economist David Autor cited in the report found that 60% of today’s workers are employed in occupations that didn’t exist in 1940. This implies that more than 85% of employment growth over the last 80 years is explained by the technology-driven creation of new positions, our economists write.

Hinton: “humanity is just a passing phase in the evolution of intelligence”

Source: ComputerWorld, May 2023

What’s the worst-case scenario that’s conceivable?

“I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”


@ EMTech Digital 2023

Foundation Models

Why is ChatGPT Bad @ Math?

Source: ReTable, Mar 2023

This Artificial Intelligence (AI) chatbot called ChatGPT is a language model whose abilities are constrained by the quality and quantity of the data it has been trained on. To understand the reason for this limitation and how ChatGPT works, it helps to take a closer look at ChatGPT’s (Chat Generative Pre-trained Transformer) underlying technology. ChatGPT is a text-based language model that has been trained on a finite dataset.

ChatGPT is Better at Generating Human-Like Responses Than Doing Perfect Math Calculations

Another point is that since ChatGPT is a text-based program, it has been trained to communicate in and generate human language. ChatGPT’s AI language model is structured to refine itself based on human feedback. Its core mechanism is called “next-word prediction”, the defining task of a “language model”.

As an AI language model, ChatGPT is designed to process and generate natural language responses that sound like they were written by a human. This is achieved through the use of large amounts of training data, which allows the model to learn the patterns and structures of human language.

On the other hand, perfect math calculations require a high degree of accuracy and precision, and the ability to perform complex mathematical operations quickly and efficiently. While AI models like ChatGPT can certainly perform math calculations, they may not always be as accurate or efficient as dedicated math software or hardware.

Furthermore, the primary goal of ChatGPT is to simulate human-like conversations, which often involve more than just providing factual information. Conversations can involve humor, sarcasm, emotions, and other human-like qualities that cannot be captured through math calculations alone.

What Is a Language Model?

Essentially, a language model is a computational model that is trained on a large corpus of natural language data, such as text or speech.

The goal of a language model is to be able to predict the next word or sequence of words in a sentence or phrase, based on the context of the previous words. This is accomplished through the use of statistical and machine learning algorithms, which allow the model to learn the patterns and structures of human language.

The language model used by ChatGPT works by determining, from text data and statistics, the probability of which word comes next. In this way the AI language model can generate a relevant and satisfying response to your question.

With the language model, the chatbot forms its answer from your words using transformer technology, which means it is sensitive to what you write and how you express yourself. In other words, ChatGPT is a text-based language model, not a calculator or a math genius. Just like us, its knowledge and abilities are limited to the scope of the data it has.
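The statistical idea behind next-word prediction can be shown with a toy example. This sketch builds a bigram model: for each word in a tiny hypothetical corpus, it counts which word follows it most often. Real systems like ChatGPT use transformer networks trained on vastly more data, but the underlying principle of predicting the most probable next word from observed statistics is the same.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; any text would do.
corpus = "the cat sat on the mat and the cat saw the cat".split()

# Count word -> next-word frequencies (bigram statistics).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" (follows "the" 3 times vs. "mat" once)
```

A model like this can only ever echo the statistics of its training text; it has no notion of arithmetic or facts, which is the article’s point about why a language model is not a calculator.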

Can We Trust ChatGPT?

Having covered all of these limitations of ChatGPT, a natural question arises: “Can we trust ChatGPT with math?”

The power of ChatGPT and similar AI language models comes from their ability to generate human-like responses, not from their accuracy. Seen from that angle, mathematical inaccuracy from ChatGPT shouldn’t be a surprise; awkward phrasing or grammatical errors, on the other hand, would be.