AI Engineering & Strategy

Key Questions

What is the value chain for AI/LLMs? (Chips -> Foundation Models -> Compute/Inference -> Apps)

How does the industry think about training vs. inference, model size, and power consumption?

What are the differences between closed-source and open-source LLMs?

How can we ensure the privacy and security of data used by AI systems?

What are the key challenges in integrating AI technologies into existing IT infrastructure?

What is the immediate and medium-term future for LLMs, and where will the next leaps and improvements come from?

If we build, how can we avoid irreversible decisions that lock us into (soon to be) obsolete paths?

Learning Objectives


Hard Truths

The obsolescence cycle is extremely fast, so we must avoid significant sunk costs and irreversible decisions.

The fast pace of product and technological improvement challenges rigid, aged organizational structures; it is impossible to keep up when chopping and changing requires wading through red tape.

Generative AI is not the be-all and end-all; not everything needs to be an LLM-based chatbot.

Lock-in and sunk costs are real risks, e.g. pouring money into deploying and fine-tuning a Llama 2 variant only to see it become obsolete 12 months later (see the sketch after these reality checks).

The long tail of training or fine-tuning models yourself, building UIs, and maintaining it all is too much investment.
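
One way to blunt the lock-in and sunk-cost risks above is to keep application code decoupled from any specific model or vendor. The sketch below is illustrative only and not from the source: the names (ChatBackend, LocalStubBackend, answer_question) are hypothetical, and a real adapter would wrap a vendor SDK or a self-hosted open-weights model server behind the same interface.

```python
# Hypothetical sketch: hide the LLM provider behind a small interface so
# swapping models later is a one-line change, not a rewrite.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Completion:
    text: str
    model: str


class ChatBackend(ABC):
    """Provider-agnostic interface the rest of the application depends on."""

    @abstractmethod
    def complete(self, prompt: str) -> Completion:
        ...


class LocalStubBackend(ChatBackend):
    """Stand-in backend; a real adapter would call a vendor API or a
    self-hosted model server behind the same `complete` signature."""

    def __init__(self, model: str = "stub-model"):
        self.model = model

    def complete(self, prompt: str) -> Completion:
        # Echo the prompt so the sketch runs without any external service.
        return Completion(text=f"[{self.model}] {prompt[:80]}", model=self.model)


def answer_question(backend: ChatBackend, question: str) -> str:
    # Application code sees only the interface, never a vendor SDK, so
    # replacing the backend does not ripple through the codebase.
    return backend.complete(question).text


if __name__ == "__main__":
    backend = LocalStubBackend()  # swap in a different adapter to change providers
    print(answer_question(backend, "Summarise the AI value chain."))
```

The same logic applies to prompts and evaluation sets: keeping them in portable formats, rather than tied to one vendor's tooling, is what keeps switching cheap.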

Resources
