Course Overview
In this course, students gain the knowledge and skills to plan, install, maintain, and manage a secure HPE Ezmeral Data Fabric File and Object Store cluster. Through lectures and labs, you learn how to design and install a cluster and how to perform pre- and post-installation testing. You configure users and groups, and you work with key features of an HPE Ezmeral Data Fabric cluster, including volumes, snapshots, and mirrors, learning how to use remote mirrors for disaster recovery. The course also covers monitoring and maintaining disks and nodes, and troubleshooting basic cluster problems.
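For a sense of the tools used in the labs, the short Python sketch below drives two of the data-protection operations mentioned above (a volume snapshot and a remote mirror) by shelling out to maprcli. It is only an illustration: it assumes a cluster node with maprcli on the PATH, the volume, snapshot, and cluster names are placeholders, and exact options can differ between Data Fabric releases.

#!/usr/bin/env python3
"""Illustrative only: snapshot a volume and set up a remote mirror with maprcli.

Assumes a Data Fabric node with maprcli on the PATH and a user with the needed
volume permissions; volume, snapshot, and cluster names are placeholders.
"""
import subprocess


def run(cmd):
    """Run a CLI command, echo it, and return its stdout (raises on failure)."""
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout


# Point-in-time snapshot of a source volume (run on the primary cluster).
run(["maprcli", "volume", "snapshot", "create",
     "-volume", "projects_vol", "-snapshotname", "projects_vol_daily"])

# Create a mirror volume that points at the source volume on the primary
# cluster (run on the remote cluster), then start a mirror synchronization.
run(["maprcli", "volume", "create",
     "-name", "projects_vol_mirror", "-path", "/mirrors/projects_vol",
     "-type", "mirror", "-source", "projects_vol@primary.cluster"])
run(["maprcli", "volume", "mirror", "start", "-name", "projects_vol_mirror"])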
Who should attend
System administrators who will be installing, configuring, and maintaining an HPE Ezmeral Data Fabric File and Object Store cluster environment
Prerequisites
Participants in this course should have:
- Basic Hadoop knowledge and intermediate Linux knowledge
- Experience using a Linux text editor such as vi
- Familiarity with Linux command-line utilities such as mv, cp, ssh, grep, and useradd
Course Objectives
By the end of the course, the participant should be able to:
- Demonstrate basic cluster administration skills
- Audit and prepare cluster hardware prior to installation
- Run pre-installation tests to verify performance
- Plan a service layout according to cluster configuration and business needs
- Describe the primary architectural components of an HPE Ezmeral Data Fabric installation (nodes, storage pools, volumes, containers, chunks, blocks)
- Use the UI installer or the manual method to install the HPE Ezmeral Data Fabric File and Object Store cluster
- Define and implement an appropriate node topology
- Define and implement an appropriate volume topology
- Set permissions and quotas for users and groups
- Set up email and alerts
- Locate and review configuration files used by the cluster
- Start and stop services
- Use Hadoop commands to perform basic functions
- Use maprcli commands to perform basic functions (a short example using hadoop and maprcli follows this list)
- Use the MCS (Control System) interface
- Assist with data ingestion
- Configure, monitor, and respond to alerts
- Detect and replace failed disks
- Detect and replace failed nodes
- Create and delete snapshots using both maprcli and the MCS
- Create and delete mirrors using both maprcli and the MCS
- Use mirrors and snapshots for data protection
- Add, remove, and upgrade ecosystem components
- Monitor and tune job performance
- Set up NFS access to the cluster
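As a rough illustration of the hadoop and maprcli usage listed above, the following Python sketch shells out to both tools to run a few read-only checks. It assumes it is executed on a cluster node with hadoop and maprcli on the PATH; the hostname is a placeholder and output columns can vary by release.

#!/usr/bin/env python3
"""Illustrative only: a few basic hadoop and maprcli checks.

Assumes a Data Fabric cluster node with hadoop and maprcli on the PATH; the
hostname below is a placeholder and output columns can vary by release.
"""
import subprocess


def run(cmd):
    """Run a CLI command and return its stdout, raising if the command fails."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout


# Basic Hadoop file system operation against the cluster namespace.
print(run(["hadoop", "fs", "-ls", "/"]))

# List cluster nodes along with the services each node is running.
print(run(["maprcli", "node", "list", "-columns", "hostname,svc"]))

# Check for raised alarms (for example, failed disks or services).
print(run(["maprcli", "alarm", "list"]))

# List the disks on one node, e.g. to identify a failed disk for replacement.
print(run(["maprcli", "disk", "list", "-host", "node01.example.com"]))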
Course Content
- Module 1 - Data Fabric File and Object Store
- Module 2 - Prepare for Installation
- Module 3 - Install a Data Fabric Cluster
- Module 4 - Verify and Test Cluster
- Module 5 - Work with Volumes
- Module 6 - Work with Snapshots
- Module 7 - Work with Mirrors
- Module 8 - Configure Users and Cluster Parameters
- Module 9 - Configure Cluster Access
- Module 10 - Monitor and Manage the Cluster
- Module 11 - Disk and Node Maintenance
- Module 12 - Troubleshoot Cluster Problems