Why Do We Need Machine Learning?
Artificial Intelligence & Machine Learning
Machine Learning is the ability to learn and adapt patterns from data by combining Computer Science and Statistical Inference. Humans give computers the ability to learn without being explicitly programmed to do so, whether or not within a set boundary of rules. ML is applied wherever tasks are well defined, repetitive, and consistent. It gains its edge over humans through its capacity to perform multidimensional calculations at the speeds offered by Graphics Processing Units (GPUs), which grow faster with each passing year.
Can it be a threat to human jobs?
To a certain extent it will be, wherever there are manual, repetitive tasks. But while no human is better at performing such tasks than a computer, no computer is better than a human supported by a computer. Therefore, with time it should become an extension of humans, if it has not already become so.
Types of Machine Learning:
Supervised Learning – Labeled past inputs and labeled past outputs are available, and the machine is programmed to find patterns in them. Based on those patterns it predicts outcomes for future, unseen data.
Unsupervised Learning – The data used are not labeled; the algorithm finds structure or clusters in them. In essence, the data are sorted based on their similarities and differences.
Semi-supervised Learning – As the name suggests, this sits between supervised and unsupervised learning and uses both labeled and unlabeled data. It is used when labeling is challenging, expensive, or both. The learning algorithm fits the labeled data and then generalizes to the unlabeled data, or even to new data points with similar characteristics.
Reinforcement Learning – It does not need any external training data; it generates data itself during training through a trial-and-error method. If a trial is successful, the outcome is rewarded; if not, it is penalized. The goal of the agent is to maximize the expected reward over a given period of time.
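The contrast between the first two types above can be sketched in a few lines. This is a minimal illustration, assuming scikit-learn is available and using its toy Iris dataset; the specific models (LogisticRegression, KMeans) are illustrative choices, not the only options.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: learn a mapping from labeled inputs X to labels y.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: group the same inputs by similarity, ignoring y entirely.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("clusters found:", len(set(km.labels_)))
```

Note that the clustering step never sees the labels `y`; any resemblance between its clusters and the true classes comes purely from structure in the inputs.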
Machine Learning Cycle:
1. Objective
2. Data Collection
3. Data Preparation
4. Modeling
5. Evaluation
6. Interpret
7. Deployment
8. Interpretation of Deployed Model
Deep Learning:
It is a part of machine learning that allows computers to mimic the human brain to a certain extent; only time will tell how powerful it becomes with further research and technological advancement. This mimicking ability lets the machine ingest unstructured data such as text, images, and speech by converting them into vectors. Deep Learning is built on Neural Networks, which are many layers of computational nodes, or neurons, as in an animal brain. Just as a signal passes from one neuron to another in the brain, each computational node receives a signal, processes it, and passes it on to the next node. A single neuron can be thought of as a Logistic Regression, and a Neural Network as layers of these logistic regressions stacked together; logistic regression itself is Linear Regression passed through a Sigmoid Function. This is what allows us to capture non-linear patterns more efficiently.
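The "neuron as logistic regression" idea can be shown directly. The sketch below, using NumPy with made-up weight and input values, computes a linear regression output and passes it through the sigmoid, which is exactly one neuron's activation.

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# One "neuron": linear regression (w . x + b) passed through the sigmoid.
w = np.array([0.5, -0.3])   # illustrative weights (assumed values)
b = 0.1                     # illustrative bias
x = np.array([2.0, 1.0])    # illustrative input

linear_part = np.dot(w, x) + b      # linear regression: 0.8
activation = sigmoid(linear_part)   # logistic regression / neuron output
print(activation)                   # about 0.69
```

Stacking many such units in layers, each feeding its activations forward to the next, is what turns this single logistic regression into a neural network.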
Working of Deep Learning:
The layer where data is fed in is the input layer, the last layer, from which predictions are obtained, is the output layer, and everything in between is a hidden layer. When the calculations progress from the input layer to the output layer, the process is called forward propagation; in the reverse direction, it is called backward propagation.
Steps:
1. The activation of a neuron is the weighted sum of all activations from the previous layer plus a bias, composed with another function (for example, the sigmoid mentioned above). The weights represent the strength of each connection and the bias indicates how readily the neuron activates. We cannot change activations directly; we can only influence the weights and biases, which are initialized randomly. This is forward propagation.
2. The cost is then calculated as the error between the predicted and the real outcome. The goal is therefore to find the weights and biases that minimize this cost. The slope of the cost function is calculated and we repeatedly move in the direction in which it decreases, a procedure known as gradient descent; there is no guarantee that the minimum reached is the smallest possible value. The weights and biases are updated by subtracting the product of the learning rate and the partial derivative of the loss with respect to each of them, computed via the chain rule. This is back propagation.
3. The above process is repeated many times, depending on the computing power available and the desired result; this count is known as the number of iterations.
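The three steps above can be sketched end to end for the simplest possible "network", a single logistic-regression neuron. This is a toy illustration, assuming NumPy, a made-up learning rate and iteration count, and the logical AND function as training data; a real network would have hidden layers and a library such as PyTorch or TensorFlow would handle the gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn the logical AND function (an illustrative choice).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])

# Weights and bias initialized randomly, as described in step 1.
w = rng.normal(size=2)
b = 0.0
lr = 0.5  # learning rate (assumed value)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):  # number of iterations (step 3)
    # Step 1, forward propagation: weighted sum plus bias, then sigmoid.
    a = sigmoid(X @ w + b)

    # Step 2, cost: cross-entropy error between prediction and real outcome.
    cost = -np.mean(y * np.log(a) + (1 - y) * np.log(1 - a))

    # Step 2, back propagation: chain rule gives the partial derivatives.
    dz = (a - y) / len(y)
    dw = X.T @ dz
    db = dz.sum()

    # Gradient descent update: subtract learning rate times the gradient.
    w -= lr * dw
    b -= lr * db

print("final cost:", cost)
print("predictions:", np.round(sigmoid(X @ w + b)))
```

After training, the rounded predictions match the AND truth table, and the cost has shrunk toward zero, which is exactly the loop of forward propagation, cost, back propagation, and repeat described above.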