AI Engineering & Strategy
Questions You'll Answer
What is the value chain for AI/LLMs? (Chips -> Foundational Models -> Compute/Inference -> Apps)
How does the industry think about training vs. inference, model size, and power consumption?
What are the differences between closed and open source LLMs?
How can we ensure the privacy and security of data used by AI systems?
What are the key challenges in integrating AI technologies into existing IT infrastructure?
What is the immediate- and medium-term future for LLMs, and where will the next leaps and improvements come from?
If we build, how can we avoid irreversible decisions that lock us into soon-to-be-obsolete paths?
What You'll Learn
See the opportunities and challenges across the AI value chain - GPU Chips, Foundational Models, Compute (Training, Inference) + Storage + Data Infrastructure, Application.
Understand what it takes to conceptualize, develop, & deploy AI products, e.g. compute infrastructure, toolchain
Know which LLMs are good, how good they are, and for which tasks
Know what everyone not using LLMs is missing out on
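The "which LLMs are good, for what" point can be made concrete with a lightweight evaluation loop over your own tasks before committing to a model. A minimal sketch, assuming nothing beyond the standard library; the model callables and the substring-match scorer are illustrative assumptions, not a real benchmark:

```python
# Hypothetical sketch: score candidate LLMs on your own task cases.
# A "model" here is any function from prompt -> answer, so the same
# harness works for a hosted API, a local model, or a stub.
from typing import Callable

Model = Callable[[str], str]

def evaluate(model: Model, cases: list[tuple[str, str]]) -> float:
    """Return the fraction of cases where the expected answer
    appears (case-insensitively) in the model's output."""
    hits = sum(expected.lower() in model(prompt).lower()
               for prompt, expected in cases)
    return hits / len(cases)

# Toy stand-ins for two candidate models:
def model_a(prompt: str) -> str:
    return "Paris is the capital of France."

def model_b(prompt: str) -> str:
    return "I am not sure."

cases = [("What is the capital of France?", "Paris")]
print(evaluate(model_a, cases))  # 1.0
print(evaluate(model_b, cases))  # 0.0
```

In practice the scorer would be task-specific (exact match, rubric grading, human review), but the point stands: measure on your tasks, not on leaderboard averages.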
Hard Truths
The obsolescence cycle is extremely fast, so we must avoid significant sunk costs and irreversible decisions.
The fast pace of product and technological improvement challenges rigid or aging organizational structures: it is impossible to keep up if every change requires heavy red tape.
Generative AI is not the be-all and end-all, not everything needs to be an LLM-based chatbot.
Lock-in and sunk costs are real risks, e.g. sinking significant money into deploying and fine-tuning a Llama 2 model only to see it become obsolete 12 months later.
The long tail of training/fine-tuning models yourself, building UIs, and maintaining it all is often too much investment.
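One common tactic against the lock-in risk above is to code the application against a thin, provider-agnostic interface rather than any vendor SDK, so swapping models is a config change rather than a rewrite. A minimal sketch; all class names, model names, and paths here are illustrative assumptions, not a real SDK:

```python
# Hypothetical sketch: the application depends only on the TextModel
# contract, never on a specific vendor's client library. Replacing a
# hosted closed model with a self-hosted open-weights one (or vice
# versa) then touches one line, not the whole codebase.
from dataclasses import dataclass
from typing import Protocol


class TextModel(Protocol):
    """Minimal contract the application codes against."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class HostedAPIModel:
    """Stand-in for a closed, hosted model behind an HTTP API."""
    model_name: str

    def complete(self, prompt: str) -> str:
        # Real code would call the vendor's API here.
        return f"[{self.model_name}] reply to: {prompt}"


@dataclass
class LocalOpenModel:
    """Stand-in for a self-hosted open-weights model."""
    weights_path: str

    def complete(self, prompt: str) -> str:
        # Real code would run local inference here.
        return f"[local:{self.weights_path}] reply to: {prompt}"


def answer_question(model: TextModel, question: str) -> str:
    """Application logic sees only the interface, never the vendor."""
    return model.complete(question)


# Swapping providers is a config change, not a rewrite:
print(answer_question(HostedAPIModel("model-x"), "What is the AI value chain?"))
print(answer_question(LocalOpenModel("/models/open-llm"), "What is the AI value chain?"))
```

The abstraction does not remove obsolescence risk, but it turns an irreversible decision into a reversible one, which is exactly the property the hard truths above call for.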