From the course: Supervised Learning Essential Training

Comparing networks and selecting final models

- [Narrator] There are many different evaluation metrics we can use to understand how well our algorithms perform. We can create a few types of charts to visualize and better understand our model performance. We can compare multiple models to each other by looking at evaluation metrics like accuracy, precision, recall, F1 score, the ROC curve, and AUC, or area under the curve. However, neural networks have more layers of complexity, so using strategic data sampling, data splitting, and testing the number of model iterations helps us better understand which model architectures produce the best model in our domain. One technique leveraged frequently in industry is creating multiple data samples that allow for rapid experimentation during modeling. It's often suggested that data scientists start any modeling project with not only a training and testing set, but also a validation set to test hyper…
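Below is a minimal sketch, not taken from the course, of how the metrics and splits mentioned above might be computed with scikit-learn. The synthetic dataset, the two candidate models, and variable names such as X, y, X_train, X_val, and X_test are illustrative assumptions.

# Sketch: compare two candidate models on a validation set using the
# metrics named in the transcript (accuracy, precision, recall, F1, AUC).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Synthetic binary-classification data stands in for a real dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

# Split off a held-out test set, then carve a validation set out of the rest,
# giving the training / validation / testing sets described above.
X_train_full, X_test, y_train_full, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_train_full, y_train_full, test_size=0.25, random_state=42)

# Fit each candidate model and score it on the validation set.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_val)
    proba = model.predict_proba(X_val)[:, 1]
    print(name,
          "accuracy:", round(accuracy_score(y_val, pred), 3),
          "precision:", round(precision_score(y_val, pred), 3),
          "recall:", round(recall_score(y_val, pred), 3),
          "F1:", round(f1_score(y_val, pred), 3),
          "AUC:", round(roc_auc_score(y_val, proba), 3))

In a sketch like this, the test set would be touched only once, to score whichever model is selected from the validation comparison.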
