This course includes a PDF course guide as well as a lab environment where students can work through demonstrations and exercises at their own pace. The course presents advanced models available in IBM SPSS Modeler. Participants are first introduced to PCA/Factor, a technique that reduces a large number of fields to a smaller set of core factors, referred to as components or factors (a brief illustrative sketch of this idea follows this description). The next topics focus on supervised models, including Support Vector Machines, Random Trees, and XGBoost. The course then reviews methods to analyze text data, combine individual models into a single model, and extend IBM SPSS Modeler by adding external models, developed in Python or R, to the Modeling palette.

If you are enrolling in a Self-Paced Virtual Classroom or Web-Based Training course, please review the Self-Paced Virtual Classes and Web-Based Training Classes section of our Terms and Conditions page, as well as the system requirements, before you enroll to ensure that your system meets the minimum requirements for this course. Terms and Conditions: Ingram Micro – https://www.ingrammicrotraining.com/Terms-of-use.aspx; IBM – http://www.ibm.com/training/terms
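The sketch below is a minimal illustration of the PCA/Factor idea mentioned above: reducing many correlated fields to a few core components. It assumes Python with NumPy and scikit-learn and uses synthetic data; it is not part of the course materials, which work with the PCA/Factor node inside IBM SPSS Modeler itself.

    # Illustrative sketch only (assumes Python, NumPy, and scikit-learn; not course material).
    # The course itself uses the PCA/Factor node in the IBM SPSS Modeler interface.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Synthetic dataset: 200 records, 10 numeric fields built from 3 underlying signals.
    base = rng.normal(size=(200, 3))
    X = np.hstack([base + 0.1 * rng.normal(size=(200, 3)) for _ in range(3)]
                  + [rng.normal(size=(200, 1))])

    pca = PCA(n_components=3)        # keep 3 core components
    scores = pca.fit_transform(X)    # component scores replace the original 10 fields
    print(scores.shape)              # (200, 3)
    print(pca.explained_variance_ratio_)  # share of variance captured by each component

In Modeler, the PCA/Factor node produces an analogous result: the derived component scores can stand in for the original fields in downstream modeling.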
Version:
v18.2
Audience:
Data scientists
Business analysts
Experienced users of IBM SPSS Modeler who want to learn about advanced techniques in the software
Objectives:
Detail:
Pre-Requisites:
Knowledge of your business requirements
Required: IBM SPSS Modeler Foundations (V18.2) course (0A069G/0E069G), or equivalent knowledge of how to import, explore, and prepare data with IBM SPSS Modeler v18.2 and of the basics of modeling.
Recommended: Introduction to Machine Learning Models Using IBM SPSS Modeler (V18.2) course (0A079G/0E079G), or equivalent knowledge of or experience with supervised machine learning models (CHAID, C&R Tree, Regression, Random Trees, Neural Net, XGBoost), unsupervised machine learning models (TwoStep Cluster), and association machine learning models such as Apriori.