Jindal School Of Management Ranking : Life Changing Experience
I did my BBA with a major in Finance, graduating in 2010. I then worked as a Finance Executive for two years, and was in Australia on a work visa from 2012 to 2015. I came to the USA in 2015, having received my Green Card.
By then I had realized I wanted to be on the tech side of whatever industry I worked in. I started learning about computers, programming languages, networking, and related topics on my own, and found the niche I wanted to be in: Analytics, Machine Learning, and FinTech.
I am currently working as an Operations Analyst at PNC Financial. I mainly deal with balancing checks against insurance claims provided by hospitals. It is not a technical role, but I do use some Excel, SQL, and Tableau for aggregating data and creating reports.
While working at PNC, I enrolled in the Master of Science in Business Analytics program at the University of Texas at Dallas.
It was very much an eye-opener for me. I loved all the material I learned in my coursework.
I have learned how to clean, summarize, and visualize data sets and apply statistical techniques to them.
I have used languages like Python, R, SAS and SQL.
I have used applications such as PowerBI and Tableau.
I have also taken a Machine Learning course in which I worked with Python and its associated libraries, focusing on optimizing well-known algorithms like regression, clustering, and deep neural networks.
The cleaning process usually depends on the data set itself. Some data sets require simple cleaning, like removing duplicates and special characters, while others require a major overhaul in which I have to join and delete columns and do feature engineering.
This can be part of an ETL process, or I can simply import the data into Python or R and clean it up manually. Personally, though, I prefer to clean up a data set in Excel or Power BI: there is less coding, and the user interface is more intuitive.
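When the cleaning does happen in code, the steps described above might look like the following pandas sketch. The claims-style columns and values are purely hypothetical, chosen only to illustrate deduplication, stripping special characters, and a simple engineered feature.

```python
import pandas as pd

# Hypothetical claims data; column names and values are illustrative only.
df = pd.DataFrame({
    "claim_id": ["A1", "A1", "B2", "C3"],
    "hospital": ["St. Mary's", "St. Mary's", "Oak Ridge", "Oak Ridge"],
    "amount": ["1,200.50", "1,200.50", "850.00", "430.25"],
    "admit_date": ["2023-01-05", "2023-01-05", "2023-02-10", "2023-03-01"],
    "discharge_date": ["2023-01-09", "2023-01-09", "2023-02-12", "2023-03-04"],
})

# Remove exact duplicate rows.
df = df.drop_duplicates()

# Strip special characters so the amount can be treated as a number.
df["amount"] = df["amount"].str.replace(",", "", regex=False).astype(float)

# Simple feature engineering: derive length of stay, then drop the raw dates.
df["admit_date"] = pd.to_datetime(df["admit_date"])
df["discharge_date"] = pd.to_datetime(df["discharge_date"])
df["length_of_stay"] = (df["discharge_date"] - df["admit_date"]).dt.days
df = df.drop(columns=["admit_date", "discharge_date"])

print(df)
```

Joining other tables (e.g. a hospital lookup) would follow the same pattern with `pd.merge` before the columns are pruned.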
Data is best summarized through Exploratory Data Analysis.
This is the phase after cleaning the data set.
It is done to get a basic understanding and outlook of the data we are working with. It entails finding sums, counts, averages, and other metrics. It can be used to find outliers. It can involve exploring missing data, which is a whole topic on its own. It can include bar charts or trend lines to see what the basic pattern of the data set looks like.
These steps can be done through Power BI, Alteryx, or Tableau, but I prefer to find outliers and missing data manually. I also check for correlation between features and whether some features would be best removed. I personally use R and its various imputation packages to fill in missing data.
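As a minimal sketch of that EDA pass in Python (the author prefers R's imputation packages; this just shows the same ideas with pandas, using made-up numbers), one might compute summary metrics, missing-value counts, and feature correlations, then do a very simple median imputation:

```python
import numpy as np
import pandas as pd

# Small illustrative data set with one missing value.
df = pd.DataFrame({
    "amount": [120.0, 95.0, np.nan, 300.0, 110.0],
    "visits": [1, 2, 1, 5, 2],
})

# Basic summary metrics: counts, means, spread.
print(df.describe())

# Missing data per column.
print(df.isna().sum())

# Correlation between features; a highly correlated pair is a
# candidate for dropping one of the two.
print(df.corr())

# A deliberately simple imputation: fill the missing amount with the median.
df["amount"] = df["amount"].fillna(df["amount"].median())
```

Dedicated imputation packages (in R, or e.g. scikit-learn's imputers in Python) offer far more sophisticated strategies than a single median fill.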
I handle outliers differently based on the context. I might delete a chunk of the data if it is not that prevalent, lump outliers into neighboring categories, or simply keep them.
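Those three options can be sketched with the common 1.5 × IQR rule (one of many ways to flag outliers; the values below are invented for illustration):

```python
import pandas as pd

s = pd.Series([10, 12, 11, 13, 12, 95])  # 95 is an obvious outlier

# Flag outliers with the 1.5 * IQR rule.
q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
is_outlier = (s < q1 - 1.5 * iqr) | (s > q3 + 1.5 * iqr)

# Option 1: delete them.
dropped = s[~is_outlier]

# Option 2: "lump" them into the nearest acceptable bound by capping.
capped = s.clip(lower=q1 - 1.5 * iqr, upper=q3 + 1.5 * iqr)

# Option 3: keep them, but carry a flag for downstream steps.
flagged = pd.DataFrame({"value": s, "outlier": is_outlier})
```

Which option is right depends on whether the extreme values are errors, rare-but-real events, or exactly the cases the analysis is about.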
Unless I am applying a machine learning algorithm to the data set, my last step is visualizing it. At this step I may also create a report using Access or a dashboard with Shiny (R) or Tableau.
As for the visuals themselves, I prefer Python libraries like Matplotlib and Seaborn. R also has lots of packages for visualization. However, I do not recommend or use Excel for charting.
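A bar chart of the kind mentioned earlier takes only a few lines in Matplotlib (the hospital names and counts below are made up; Seaborn would give a similar result with `sns.barplot`):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Illustrative category totals, e.g. claim counts per hospital.
categories = ["Hospital A", "Hospital B", "Hospital C"]
counts = [42, 17, 29]

fig, ax = plt.subplots()
ax.bar(categories, counts)
ax.set_xlabel("Hospital")
ax.set_ylabel("Claim count")
ax.set_title("Claims by hospital")
fig.savefig("claims_by_hospital.png")
```

The same figure object can be embedded in a report or exported for a dashboard, which keeps the charting logic in code rather than in a spreadsheet.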
I took a course called "Advanced R" where I had to work with financial data sets. I had worked with S&P 500 data sets before using Python, SAS, and Excel, but this course went more in-depth.
An internship is not a requirement for this program, but I believe it is a good way to get exposure and learn on-the-job skills.