Pass the NetApp Certified AI Expert exam to stay ahead of the technology curve. Achieving this certification validates the skills and knowledge associated with NetApp AI solutions and related industry technologies.
Prerequisites
This exam is for technical professionals who have experience building infrastructure for AI workloads and have six to 12 months of hands-on technical experience with AI.
You should have:
- An understanding of NetApp AI solutions, AI concepts, the AI lifecycle, AI software and hardware architectures, and common AI challenges.
Exam topics
Domain 1: AI overview (15%)
- Demonstrate the ability to train models and run inference (a minimal sketch follows this domain's topic list)
- Training, inferencing, and predictions
- Describe machine learning benefits
- AI, machine learning, deep learning
- Differentiate between the uses of different algorithm types
- Supervised, unsupervised, reinforcement
- Describe how AI is used in varied industries
- Digital twins, agents, healthcare
- Describe convergence of AI, high-performance computing, and analytics
- Leveraging the same infrastructure for AI, HPC, and analytics
- Determine the use of AI on-premises, in the cloud, and at the edge
- Benefits, risks
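To make the training-versus-inferencing distinction concrete, here is a minimal supervised-learning sketch using the open-source scikit-learn library. The dataset and model choice are illustrative assumptions only; the exam does not tie these concepts to any specific library.

```python
# Minimal supervised-learning sketch: "training" fits model parameters to
# labeled data; "inferencing" applies the trained model to new inputs.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                       # labeled data (supervised learning)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                             # training: learn parameters from labeled data

predictions = model.predict(X_test)                     # inferencing: generate predictions on new data
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```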
Domain 2: AI lifecycle (27%)
- Determine the differences between predictive AI and generative AI
- Industry use of predictive and generative AI
- Describe the impact of predictive AI
- Classification, neural networks, reinforcement learning, preference determination
- Describe the impact of generative AI for text, images, videos, and decisions
- Transformer models, hallucinations, retrieval-augmented generation (RAG) vs. fine-tuning (see the RAG sketch after this domain's topic list)
- Determine how NetApp tools can enable data aggregation, data cleansing, and data modeling
- BlueXP classification, XCP, CopySync
- Determine the requirements needed for model generation
- Data, code, compute and time, scenarios
- Compare the differences between model building and fine-tuning models
- Model building = data, code; Fine-tuning = existing model, data, code
- Determine the requirements needed for inferencing
- Loading the model into memory (model size); retrieval-augmented generation (RAG) or other data lookups (agents); NetApp data mobility solutions
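The RAG-versus-fine-tuning topic above can be illustrated with a toy retrieval-augmented generation loop. The keyword-overlap retriever and the generate() stub below are hypothetical stand-ins for a real embedding model and LLM, and the document text is invented for illustration.

```python
# Minimal retrieval-augmented generation (RAG) sketch: instead of fine-tuning a
# model on private data, relevant documents are retrieved at inference time and
# supplied to the model as context.
import re

documents = [
    "ONTAP Snapshot copies capture a point-in-time view of a volume.",
    "FlexCache provides a sparse cache of a remote origin volume.",
    "SnapMirror replicates volumes between ONTAP clusters.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by token overlap with the query; a real pipeline would use
    # vector embeddings and a similarity search instead.
    ranked = sorted(documents, key=lambda d: len(tokens(query) & tokens(d)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    # Placeholder for a call to a large language model.
    return f"[LLM response grounded in the supplied context]\n{prompt}"

question = "Which feature replicates volumes between ONTAP clusters?"
context = "\n".join(retrieve(question))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```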
Domain 3: AI software architectures (18%)
- Describe AI MLOps/LLMOps ecosystems and general use
- High-level view of Amazon SageMaker, Google Vertex AI, Microsoft Azure ML, Domino Data Lab, Run:ai, MLflow, Kubeflow, TensorFlow Extended
- Determine the differences between Jupyter notebooks vs. pipelines
- Notebooks for experimentation, pipelines for iterative development (production)
- Describe how NetApp DataOps toolkit works
- Python; Kubernetes vs. standalone; basic functionality provided by NetApp DataOps Toolkit
- Demonstrate the ability to execute AI workloads at scale with Kubernetes (see the sketch after this domain's topic list)
- Trident
- Describe the uses of BlueXP software tools to build AI solutions
- GenAI Toolkit, Workload Factory, how to securely use private data with Generative AI
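As an example of running AI workloads on Kubernetes with Trident, the sketch below uses the official kubernetes Python client to request a persistent volume claim that a Trident-backed storage class would dynamically provision on NetApp storage. The storage class name (ontap-nas), namespace, and size are assumptions for illustration.

```python
# Request a PersistentVolumeClaim against a Trident-managed storage class.
# Trident dynamically provisions the backing volume on NetApp storage, and
# training pods can then mount the claim as shared, RWX-capable storage.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="training-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        storage_class_name="ontap-nas",  # assumed Trident storage class name
        resources=client.V1ResourceRequirements(requests={"storage": "500Gi"}),
    ),
)

core_v1 = client.CoreV1Api()
core_v1.create_namespaced_persistent_volume_claim(namespace="ai-workloads", body=pvc)
```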
Domain 4: AI hardware architectures (18%)
- Describe data aggregation topologies
- Warehouses, data lakes, and lakehouses
- Describe compute architectures used with AI workloads
- CPU, GPU (NVIDIA), TPU, FPGA
- Describe network architecture used with AI workloads
- Ethernet vs. InfiniBand; relevance of RDMA and GPUDirect Storage
- Identify storage architectures used with AI workloads
- C-Series, A-Series, EF-Series, StorageGRID
- Determine the use cases of different protocols
- File, object, parallel file systems (POSIX, clients installed on hosts); file vs. object or both; integrating file data with object-based services (cloud and on-premises) for analytics (see the file-vs.-object sketch after this domain's topic list)
- Determine the benefits of SuperPOD architectures with NetApp
- E-Series, BeeGFS, integration with enterprise data
- Describe the use cases for BasePOD and OVX architectures
- AIPod, FlexPod AI, OVX
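The file-versus-object protocol topic above can be illustrated by reading the same dataset through POSIX file I/O (for example, an NFS mount from ONTAP) and through the S3 object API (for example, StorageGRID or a cloud bucket). The path, bucket name, and endpoint URL below are assumptions.

```python
# Sketch of reaching the same dataset over two protocols.
import boto3

# File protocol: standard POSIX I/O against a mounted NFS export.
with open("/mnt/datasets/train/labels.csv", "rb") as f:
    file_bytes = f.read()

# Object protocol: S3 API against an object store endpoint.
s3 = boto3.client("s3", endpoint_url="https://storagegrid.example.com")
obj = s3.get_object(Bucket="datasets", Key="train/labels.csv")
object_bytes = obj["Body"].read()

print(len(file_bytes), len(object_bytes))
```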
Domain 5: AI common challenges (22%)
- Determine how to size storage and compute for training and inferencing workloads (a back-of-the-envelope example follows this domain's topic list)
- C-Series vs. A-Series; GPU memory and chip architectures
- Describe the solutions for code, data, and model traceability
- Snapshots and cloning
- Describe how to access and move data for AI workloads
- SnapMirror, FlexCache, XCP, backup and recovery, CopySync
- Describe solutions to optimize cost
- Storage efficiencies, FabricPool, FlexCache, SnapMirror, Data Infrastructure Insights, Keystone
- Describe solutions to secure storage for AI workloads
- Bad data = bad AI; Autonomous Ransomware Protection, Multi-Admin Verification
- Describe solutions to maximize performance in AI workloads
- How to keep GPUs fully utilized, NetApp product positioning for specific workloads and architectures
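For the sizing topic above, a back-of-the-envelope calculation is often the starting point: estimate the GPU memory needed just to hold model weights at a given precision. The parameter count below is an illustrative assumption, and real sizing must also account for activations, KV cache, batch size, and framework overhead.

```python
# Rough GPU memory estimate for holding model weights at different precisions.
def weight_memory_gib(num_parameters: float, bytes_per_parameter: int) -> float:
    return num_parameters * bytes_per_parameter / 2**30

params = 7e9  # e.g., a 7-billion-parameter model (assumption)
for precision, nbytes in [("FP32", 4), ("FP16/BF16", 2), ("INT8", 1)]:
    print(f"{precision}: ~{weight_memory_gib(params, nbytes):.0f} GiB for weights alone")
```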