Machine learning: (Coursera)

 
























Customers are grouped based on their purchasing habits, so that a separate marketing strategy can be made for each group.
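This grouping idea can be sketched with a minimal k-means clustering, a standard algorithm for this task. The two customer features (e.g. annual spend and visit frequency) and the two-group data below are illustrative assumptions, not from the lecture.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means: assign each point to its nearest centroid, then recompute centroids."""
    centroids = X[[0, -1]].copy()  # simple deterministic init: first and last points
    for _ in range(iters):
        # distance of every point to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# illustrative customer groups: low spenders vs. high spenders
low = np.array([[1.0, 2.0], [1.5, 1.8], [1.2, 2.2]])
high = np.array([[8.0, 9.0], [8.5, 8.8], [7.8, 9.2]])
X = np.vstack([low, high])
labels, centroids = kmeans(X, k=2)
```

Each resulting cluster can then be targeted with its own marketing strategy.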





 
Feature selection is the selection of a subset of the original variables.
Feature extraction is the projection of the original variables onto a new, usually smaller, set of variables.
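The contrast can be sketched in a few lines: selection keeps a subset of the original columns, while extraction builds new variables as combinations of all of them. The variance-based selection rule and PCA-via-SVD extraction below are illustrative choices, not from the lecture.

```python
import numpy as np

X = np.random.default_rng(0).normal(size=(100, 5))  # 100 samples, 5 original variables

# Feature selection: keep a subset of the original columns (here, the 2 with highest variance)
variances = X.var(axis=0)
top2 = np.argsort(variances)[-2:]
X_selected = X[:, top2]            # still original variables

# Feature extraction: project onto new variables (here, the top 2 principal components)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_extracted = Xc @ Vt[:2].T        # 2 new variables, each a combination of all 5
```

Both results have 2 columns, but only the selected ones are interpretable as the original measurements.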







The agent learns, or perceives information, by interacting with the environment; it selects decisions based on that information and receives a reward, a numerical value that the agent aims to maximize over time.
---> Learns by a step-by-step trial-and-error method

A reinforcement-learning agent learns by trial and error, just as animals and humans do when 
they learn to walk, talk, drive, cook, etc. Reinforcement learning finds application in many 
real-world problems, such as digital advertising, resource management, medicine, autonomous driving, 
automatic trading, and many others. In the next lectures, we will cover these three categories of machine learning techniques in more detail, starting with supervised learning.
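The trial-and-error loop above can be sketched with a two-armed bandit: the agent tries actions, receives numerical rewards, and updates its value estimates to maximize reward over time. The epsilon-greedy rule and the reward values are illustrative assumptions, not from the lecture.

```python
import random

random.seed(0)
true_reward = [0.2, 0.8]   # illustrative: action 1 pays more (deterministic rewards)
Q = [0.0, 0.0]             # agent's value estimate for each action
counts = [0, 0]
epsilon = 0.1              # explore 10% of the time

for step in range(500):
    # trial: mostly exploit the best-known action, sometimes explore
    if random.random() < epsilon:
        a = random.randrange(2)
    else:
        a = 0 if Q[0] >= Q[1] else 1
    r = true_reward[a]                 # reward from the environment
    counts[a] += 1
    Q[a] += (r - Q[a]) / counts[a]     # incremental average of observed rewards
```

After enough trials the agent's estimates identify the better action, which it then exploits.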



=========  Supervised Learning  =========


examples:  












The yellow markers are the training examples, and the black line is a possible model explaining the relationship between the price and the size of a house.
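Such a line can be fit by ordinary least squares. The house sizes and prices below are made-up illustrative numbers standing in for the yellow markers.

```python
import numpy as np

# illustrative data: house size (m^2) vs. price (in $1000s)
size = np.array([50, 70, 90, 110, 130], dtype=float)
price = np.array([150, 200, 240, 300, 340], dtype=float)

# fit price ≈ w * size + b by least squares (the "black line")
w, b = np.polyfit(size, price, deg=1)

def predict(s):
    return w * s + b
```

The fitted `w` and `b` define the line, which can then predict the price of an unseen house size.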

Example: DNA analysis and drug discovery.


Examples: Object detection or text identification.



Examples: stock price predictions


Examples: Recommendation of movies on streaming platforms and development of spam filters.




 






FORECAST SALES, ESTIMATE THE VALUE OF A HOUSE ====> REGRESSION PROBLEM

IDENTIFY SPAM MAIL, RECOGNIZE OBJECTS IN AN IMAGE, FIND THE GENRE OF A SONG ====> CLASSIFICATION PROBLEM

Both are supervised learning problems.

Three major components specify a classification problem in supervised learning:




















Once a parametric method is trained, we no longer need the training dataset to make predictions, as we have an analytical expression for our model.
 

Ex: ARTIFICIAL NEURAL NETWORK, LINEAR REGRESSION
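A minimal sketch of the parametric case, using linear regression: training solves for a fixed set of parameters, and prediction afterwards uses only those parameters, not the training data. The tiny dataset below is illustrative.

```python
import numpy as np

# illustrative training data generated by y = 2x + 1
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])

# add a bias column and solve the least-squares problem A theta ≈ y
A = np.hstack([X, np.ones((len(X), 1))])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)

# after training, the dataset is no longer needed: prediction uses only theta
def predict(x):
    return theta[0] * x + theta[1]
```

Once `theta` is stored, the training examples can be discarded, which is exactly the parametric property described above.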





Non-parametric: complexity depends on the size of the training dataset. The method is memory-based and needs the training dataset to make a prediction.
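k-nearest neighbors is a standard example of such a memory-based method, used here as an illustrative sketch: every prediction scans the stored training set, so the training data can never be discarded.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Memory-based: each prediction scans the entire stored training set."""
    d = np.linalg.norm(X_train - x, axis=1)     # distance to every training point
    nearest = np.argsort(d)[:k]                 # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]  # majority vote

# illustrative 2-D points with two class labels
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y_train = np.array([0, 0, 0, 1, 1, 1])
```

Note there are no trained parameters at all; the "model" is the dataset itself, so its cost grows with the training set size.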






examples : 


































Because a neural network is nonlinear in its parameters, its optimization has no closed-form solution. So, we solve it with stochastic gradient descent (SGD).
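A minimal sketch of SGD on a tiny one-hidden-layer network, assuming an illustrative XOR task, squared-error loss, and a hand-derived backprop; the architecture and learning rate are my assumptions, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable, so the nonlinearity is essential
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer (4 tanh units)
W2 = rng.normal(size=4);      b2 = 0.0           # linear output
lr = 0.1

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def loss():
    return np.mean([(forward(x)[1] - t) ** 2 for x, t in zip(X, y)])

initial = loss()
for epoch in range(2000):
    for i in rng.permutation(4):          # SGD: update on one example at a time
        x, t = X[i], y[i]
        h, out = forward(x)
        g_out = 2 * (out - t)             # dL/dout for squared error
        g_h = g_out * W2 * (1 - h ** 2)   # backprop through tanh
        W2 -= lr * g_out * h; b2 -= lr * g_out
        W1 -= lr * np.outer(x, g_h); b1 -= lr * g_h
final = loss()
```

There is no formula that yields `W1`, `W2` directly; the loss is only driven down step by step by these noisy per-example gradient updates.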









































































WEEK 2: UNSUPERVISED LEARNING: CLUSTERING












 






