AI - Machine Learning

An application of artificial intelligence that automates analytical model building, using algorithms that iteratively learn from data without being explicitly programmed where to look.

Machine learning means that computers improve at a task through experience with that task, without being explicitly programmed for it.

Arthur Samuel of IBM coined the term in 1959.

ML's data-centric approach makes it great at learning by example: finding patterns in data and predicting outcomes.

ML uses data to form a hypothesis; new data exposes errors in the hypothesis, and the algorithm measures the error gap and adjusts the hypothesis to fit.

The error landscape, with its peaks and valleys, is called the error function or loss function.
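
The hypothesis-adjustment loop above can be sketched in a few lines. This is an illustrative example (not from the notes): a one-parameter model descends the loss landscape by stepping against the gradient of a squared-error loss.

```python
# Minimal sketch of loss minimization by gradient descent,
# fitting the model y = w * x to toy data (all values illustrative).

def loss(w, data):
    # Mean squared error: the "error function" whose peaks and
    # valleys form the error landscape.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def gradient(w, data):
    # Derivative of the loss with respect to w.
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

data = [(1, 2), (2, 4), (3, 6)]  # true relationship: y = 2x
w = 0.0
for _ in range(100):
    w -= 0.05 * gradient(w, data)  # step downhill in the loss landscape

print(round(w, 3))  # converges to 2.0
```

Each step measures the error gap and nudges the hypothesis (the weight `w`) toward a valley of the loss.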

ML is part data science and part statistics; there is a strong probabilistic streak to it.

The learning part is the backward flow of error that retunes the weights, known as backpropagation.
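
A hedged sketch of that backward flow, for a toy two-layer network with one weight per layer (all numbers and names are illustrative): the prediction error is propagated back through each layer via the chain rule, and each weight is retuned accordingly.

```python
# Backpropagation on a tiny two-layer linear network (illustrative).

def forward(x, w1, w2):
    h = w1 * x        # hidden activation (linear, for simplicity)
    return h, w2 * h  # hidden value and final prediction

x, target = 1.0, 6.0
w1, w2 = 0.5, 0.5
for _ in range(200):
    h, pred = forward(x, w1, w2)
    d_pred = 2 * (pred - target)   # dLoss/dpred for squared error
    d_w2 = d_pred * h              # chain rule: error flows back through layer 2
    d_w1 = d_pred * w2 * x         # chain rule: error flows back through layer 1
    w1 -= 0.05 * d_w1              # retune each weight against its gradient
    w2 -= 0.05 * d_w2

_, pred = forward(x, w1, w2)
print(round(pred, 3))  # close to the target 6.0
```

The same pattern, applied layer by layer to millions of weights, is how deep networks learn.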

Learning methods: supervised, unsupervised, and reinforcement learning.

Machine-learning algorithms use statistics to find patterns in massive amounts of data (numbers, words, images, clicks…). If it can be digitally stored, it can be fed into a machine-learning algorithm.

Deep learning is machine learning on steroids: it uses a technique that gives machines an enhanced ability to find—and amplify—even the smallest patterns. This technique is called a deep neural network. Deep because it has many, many layers of simple computational nodes that work together to munch through data and deliver a final result in the form of a prediction.

Machine and deep learning come in three flavors: supervised, unsupervised, and reinforcement.

In supervised learning, the most prevalent kind, the data is labeled to tell the machine exactly what patterns it should look for. (Image classification, speech and text recognition, medical diagnostics, fraud detection)
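
A minimal illustration of the idea (assumed example, in the spirit of the fraud-detection use case above): the labels on the training examples are what tell the algorithm which pattern to learn. Here a 1-nearest-neighbor classifier labels a new point by copying the label of its closest labeled example.

```python
# Supervised learning sketch: labeled examples drive the prediction.

labeled = [  # (feature vector, label) — toy, hand-made data
    ((1.0, 1.0), "legit"),
    ((1.2, 0.9), "legit"),
    ((8.0, 9.0), "fraud"),
    ((9.1, 8.5), "fraud"),
]

def predict(point):
    # Squared Euclidean distance to a labeled example.
    def dist(a):
        return sum((p - q) ** 2 for p, q in zip(point, a))
    # Return the label of the nearest labeled example.
    return min(labeled, key=lambda ex: dist(ex[0]))[1]

print(predict((1.1, 1.1)))  # -> legit
print(predict((8.5, 8.8)))  # -> fraud
```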

In unsupervised learning, the data has no labels. The machine just looks for whatever patterns it can find. (Visualize networks, map personal connections, find patterns in nature)
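
To make the contrast concrete, here is an illustrative sketch (not from the notes) of unsupervised learning: k-means clustering receives unlabeled points and discovers group structure on its own, by alternately assigning points to the nearest center and recomputing each center as its cluster's mean.

```python
# Unsupervised learning sketch: k-means with k = 2 on unlabeled points.

points = [(1.0, 1.0), (1.5, 2.0), (1.2, 0.8),
          (8.0, 8.0), (8.5, 9.0), (9.0, 8.2)]
centers = [points[0], points[3]]  # naive initialization from the data

for _ in range(10):
    clusters = [[], []]
    for p in points:
        # Assign each point to its nearest center.
        d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
        clusters[d.index(min(d))].append(p)
    # Move each center to the mean of its cluster.
    centers = [tuple(sum(coord) / len(cl) for coord in zip(*cl))
               for cl in clusters]

print(centers)  # two well-separated cluster centers
```

No label ever told the algorithm there were two groups; the structure emerged from the data alone.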

Lastly, we have reinforcement learning, the latest frontier of machine learning. A reinforcement algorithm learns by trial and error to achieve a clear objective. It tries out lots of different things and is rewarded or penalized depending on whether its behaviors help or hinder it from reaching its objective. (Traffic management, market analysis)
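
The trial-and-error loop can be sketched with a toy two-armed bandit (an assumed example, far simpler than real traffic-management systems): the agent tries actions, receives rewards or penalties, and gradually learns which action best serves its objective.

```python
# Reinforcement learning sketch: epsilon-greedy action selection.
import random

random.seed(0)
rewards = {"A": 0.2, "B": 0.8}   # hidden success rates; B is better
value = {"A": 0.0, "B": 0.0}     # the agent's running reward estimates

for _ in range(2000):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(value, key=value.get)
    # Reward on success, penalty on failure.
    reward = 1.0 if random.random() < rewards[action] else -1.0
    value[action] += 0.1 * (reward - value[action])  # nudge the estimate

print(max(value, key=value.get))  # the agent learns to prefer "B"
```

Nothing labels "B" as correct; the preference emerges purely from accumulated rewards and penalties.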

Overfitting: the model conforms so tightly to the idiosyncrasies of your training data that it can no longer generalize to the real world.
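
A small illustration of the danger (assumed example): a polynomial that interpolates every noisy training point exactly achieves zero training error, yet misses badly on new inputs because it has memorized the noise instead of the trend.

```python
# Overfitting sketch: exact interpolation of noisy samples of y ≈ x.

train = [(0, 0.1), (1, 0.9), (2, 2.2), (3, 2.8)]  # noisy, roughly y = x

def interpolate(x):
    # Lagrange polynomial through ALL training points: maximal overfit.
    total = 0.0
    for i, (xi, yi) in enumerate(train):
        term = yi
        for j, (xj, _) in enumerate(train):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

print(round(interpolate(1.0), 3))  # 0.9: training point reproduced exactly
print(round(interpolate(5.0), 3))  # -2.9: wildly off the true trend (y ≈ 5)
```

A straight-line fit would have a small training error but generalize far better at x = 5.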

Common algorithms:

Neural networks

Naive Bayes

Decision trees

Support vector classifiers


February 9, 2024