###### #machinelearning

### Logistic Regression

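As a rough sketch of the idea in this section (toy data and made-up names, not the actual code in this repo), from-scratch logistic regression fit by batch gradient descent might look like:

```python
import math

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    # single-feature logistic regression fit by batch gradient descent
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            grad_w += (p - y) * x  # d(log-loss)/dw
            grad_b += (p - y)      # d(log-loss)/db
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

def predict(w, b, x):
    # threshold the predicted probability at 0.5
    return 1 if sigmoid(w * x + b) >= 0.5 else 0

# toy data: the label flips from 0 to 1 between x = 3 and x = 4
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
```

After training, the learned boundary sits between the 0-labeled and 1-labeled points, so `predict(w, b, 1.0)` is 0 and `predict(w, b, 6.0)` is 1.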
This is logistic regression from scratch using Python.

### Linear Regression with any number of variables

### Multi-variable linear regression

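A sketch of what a linear regression that accepts any number of input variables can look like (toy data and illustrative names, not the repo's actual code); the gradient arrays are sized from the feature count, so the same loop handles one variable or many:

```python
def predict(weights, bias, features):
    # weighted sum over any number of input variables
    return sum(w * x for w, x in zip(weights, features)) + bias

def train_linear(X, y, lr=0.05, epochs=10000):
    # batch gradient descent on mean squared error
    n_features = len(X[0])
    weights = [0.0] * n_features
    bias = 0.0
    n = len(X)
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for row, target in zip(X, y):
            error = predict(weights, bias, row) - target
            for j, x in enumerate(row):
                grad_w[j] += error * x
            grad_b += error
        weights = [w - lr * g / n for w, g in zip(weights, grad_w)]
        bias -= lr * grad_b / n
    return weights, bias

# toy data generated from y = 2*x1 + 3*x2 + 1
X = [[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 2.0], [0.0, 1.0]]
y = [2 * x1 + 3 * x2 + 1 for x1, x2 in X]
weights, bias = train_linear(X, y)
```

On this noiseless data the fit recovers the generating coefficients (roughly 2, 3, and 1).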
This is linear regression with more than one input variable.

### Linear Regression

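For the single-variable case, the least-squares line can also be computed in closed form rather than by gradient descent; a minimal sketch (illustrative names, not the repo's actual code):

```python
def fit_line(xs, ys):
    # ordinary least squares for y = slope * x + intercept
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# points lying exactly on y = 2x + 1
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```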
This is linear regression from scratch. I used a Coursera course for help with this.

### Naive Bayes Classifier

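As an illustration of the approach (a toy weather table with made-up values, not the dataset used in this repo), a categorical naive Bayes classifier with Laplace smoothing might look like:

```python
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    # count labels and, per label, how often each feature value appears
    label_counts = Counter(labels)
    feature_counts = defaultdict(lambda: defaultdict(Counter))
    for row, label in zip(rows, labels):
        for feat, value in row.items():
            feature_counts[label][feat][value] += 1
    return label_counts, feature_counts

def predict_nb(model, row):
    label_counts, feature_counts = model
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, count in label_counts.items():
        score = math.log(count / total)  # log of the prior P(label)
        for feat, value in row.items():
            # Laplace smoothing so an unseen value never zeroes the product
            vocab = set()
            for per_label in feature_counts.values():
                vocab.update(per_label[feat])
            num = feature_counts[label][feat][value] + 1
            den = count + len(vocab)
            score += math.log(num / den)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# tiny "is it a good day to golf?" table (made-up examples)
rows = [
    {"outlook": "sunny", "windy": "no"},
    {"outlook": "sunny", "windy": "yes"},
    {"outlook": "rainy", "windy": "yes"},
    {"outlook": "rainy", "windy": "no"},
    {"outlook": "sunny", "windy": "no"},
]
labels = ["yes", "no", "no", "no", "yes"]
model = train_nb(rows, labels)
```

Working in log space avoids underflow when many feature probabilities are multiplied together.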
This is an example of the naive Bayes classifier. In this code, I used past data to determine whether it was a good day to golf.

### Implementing Hierarchical Clustering on a dataset

This code applies hierarchical clustering to a real dataset of credit card information. It outputs the centroids of however many clusters the user wants the data split into.

### Implementing K-Means Clustering on a dataset

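A from-scratch sketch of the K-means idea used in this section (toy points and illustrative names, not the credit-card code itself), returning both the centroids and the within-cluster variation:

```python
import math
import random

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kmeans(points, k, iterations=100, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k random data points
    for _ in range(iterations):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: euclidean(p, centroids[i]))
            clusters[idx].append(p)
        # update step: each centroid moves to the mean of its cluster
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(c) / len(cluster)
                                     for c in zip(*cluster))
    # total within-cluster variation (sum of squared distances)
    variation = sum(euclidean(p, centroids[i]) ** 2
                    for i, cluster in enumerate(clusters)
                    for p in cluster)
    return centroids, variation

# two well-separated blobs, so k=2 should find one centroid per blob
points = [(1.0, 1.0), (1.5, 2.0), (2.0, 1.5),
          (8.0, 8.0), (8.5, 9.0), (9.0, 8.5)]
centroids, variation = kmeans(points, k=2)
```

In practice the loop is rerun with several random seeds and the run with the lowest variation is kept.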
This code takes a real dataset of credit card users and uses K-means clustering to group the users into separate clusters. It outputs the lowest variation and the centroids of the clusters.

### Implementing KNN on a dataset

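A sketch of the K-nearest-neighbors classification used here. The rows below are made-up measurements standing in for the real glass data, and the labels are illustrative; only the distance-then-vote mechanism is the point:

```python
import math
from collections import Counter

# attribute order: RI, Na, Mg, Al, Si, K, Ca, Ba, Fe
def knn_classify(train_rows, train_labels, sample, k=3):
    # sort training rows by Euclidean distance to the user's sample
    distances = sorted(
        (math.dist(row, sample), label)
        for row, label in zip(train_rows, train_labels)
    )
    # majority vote among the k nearest neighbors
    votes = Counter(label for _dist, label in distances[:k])
    return votes.most_common(1)[0][0]

# made-up rows standing in for real glass measurements
train_rows = [
    (1.52, 13.6, 4.5, 1.1, 71.8, 0.1, 8.8, 0.0, 0.0),
    (1.51, 13.0, 3.6, 1.4, 72.7, 0.6, 8.2, 0.0, 0.0),
    (1.52, 14.4, 0.0, 2.1, 73.0, 0.0, 9.7, 0.6, 0.0),
    (1.51, 14.0, 0.1, 2.0, 72.5, 0.0, 9.6, 0.5, 0.1),
]
train_labels = ["window", "window", "headlamp", "headlamp"]
sample = (1.52, 13.4, 4.2, 1.2, 71.9, 0.2, 8.7, 0.0, 0.0)
prediction = knn_classify(train_rows, train_labels, sample, k=3)
```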
This code takes a real dataset of glass samples and uses K-nearest neighbors to classify what a new piece of glass (given by the user) is used for. To find out what type of glass they have, the user inputs the following attributes:

- RI: refractive index
- Na: sodium (unit: weight percent in the corresponding oxide, as are attributes 4-10)
- Mg: magnesium
- Al: aluminum
- Si: silicon
- K: potassium
- Ca: calcium
- Ba: barium
- Fe: iron

### Hierarchical Clustering Algorithm (with a fixed k-value)

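A sketch of the merge loop this section describes (illustrative names and toy points): every point starts in its own cluster, and on each iteration the two clusters whose centroids are closest by Euclidean distance are clumped together, until only k clusters remain.

```python
import math

def centroid(cluster):
    # mean point of a cluster
    return tuple(sum(c) / len(cluster) for c in zip(*cluster))

def hierarchical(points, k):
    # start with every point in its own cluster, then repeatedly merge
    # the two clusters whose centroids are closest (centroid linkage)
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = math.dist(centroid(clusters[i]), centroid(clusters[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

points = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0), (10.0, 0.0)]
clusters = hierarchical(points, k=3)
centroids = [centroid(c) for c in clusters]
```

With k=3, the two tight pairs merge first and the outlier at (10, 0) is left as its own cluster.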
Given a set of data, this program clusters points together based on their proximity to each other (using Euclidean distance; the closest clusters at any given point in time are clumped together for the next iteration of the loop). A more detailed explanation of what exactly this code is doing can be found at the URL below.

### Hierarchical Clustering

This is an example of hierarchical clustering.

### K-Means Clustering example (without a boolean check for changed clusters)

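This section's loop runs a fixed 100 cycles in place of a proper stopping condition. One way to write that boolean check is to compare the cluster assignments before and after each update and stop when no point has changed cluster; a sketch with illustrative names (toy data, not this repo's code):

```python
import math

def assign(points, centroids):
    # index of the nearest centroid for every point
    return [min(range(len(centroids)),
                key=lambda i: math.dist(p, centroids[i]))
            for p in points]

def kmeans_until_stable(points, centroids):
    # loop until the boolean check says no point changed cluster
    assignments = assign(points, centroids)
    changed = True
    while changed:
        # move each centroid to the mean of its assigned points
        for i in range(len(centroids)):
            members = [p for p, a in zip(points, assignments) if a == i]
            if members:
                centroids[i] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
        new_assignments = assign(points, centroids)
        changed = new_assignments != assignments  # the boolean check
        assignments = new_assignments
    return centroids, assignments

points = [(0.0, 0.0), (1.0, 0.0), (9.0, 9.0), (10.0, 9.0)]
centroids, assignments = kmeans_until_stable(points, [(0.0, 0.0), (9.0, 9.0)])
```

Because assignments can only change finitely many times and the objective never increases, this loop always terminates.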
This is an example of K-means clustering. It uses methods to assign the data to specific clusters. There are only three clusters, and the centroids are randomly assigned in the beginning. Since I haven't yet figured out how to implement a boolean expression that checks whether any points changed clusters, I just use a for loop that runs 100 cycles. It's a work in progress.

### Finding the "k" nearest neighbors to determine the closest color

This code asks the user for a point (x1, x2). Using that point and a list created at the start, it finds the "k" points closest to the user's point and returns the color that the majority of those "k" points belong to. The steps:

- Append the distances to a list using a for/in loop.
- Find the nearest points using a while loop.
- Convert the nearest points into their colors and count the number of points of each color using a for/in loop.
- Use a simple if/else at the end to determine which color the majority of points belong to.

Built-ins used: min() finds the minimum value in a list, len() returns the length of a list, and list_name.remove(value) removes the first occurrence of a value from a list (removing by index would use list_name.pop(index) instead).
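The steps above can be sketched as follows (the starter points and color names are made up for illustration; this is not the repo's exact code):

```python
import math
from collections import Counter

# a small starter list of (x1, x2, color) points (illustrative values)
points = [
    (1.0, 1.0, "red"), (1.5, 2.0, "red"), (2.0, 1.0, "red"),
    (6.0, 6.0, "blue"), (6.5, 7.0, "blue"), (7.0, 6.0, "blue"),
]

def majority_color(user_point, points, k=3):
    # step 1: build a parallel list of distances with a for/in loop
    distances = []
    for x1, x2, _color in points:
        distances.append(math.dist(user_point, (x1, x2)))
    # step 2: pull out the k nearest points with a while loop,
    # using min() to find the smallest remaining distance
    remaining = list(zip(distances, points))
    nearest = []
    while len(nearest) < k:
        closest = min(remaining)   # smallest distance sorts first
        nearest.append(closest[1])
        remaining.remove(closest)  # remove by value
    # step 3: count the colors of the k nearest points
    counts = Counter(color for _x1, _x2, color in nearest)
    # step 4: the majority color wins
    return counts.most_common(1)[0][0]

color = majority_color((2.0, 2.0), points)
```

A point near the red cluster, like (2.0, 2.0), comes back "red"; one near the blue cluster comes back "blue".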