Learn, apply and succeed with leading technologies

Reach Your Potential

Artificial Intelligence
Machine Learning
Business Intelligence
Data Science with Python
Big Data
Sentiment Analysis
Natural Language Processing
Contemporary technologies at Emory University:
Natural Language Processing
Computer Vision
Speech Recognition
Anomaly Detection
Time Series Analysis

Why choose Consort Institute at Emory University?

Agile Skills Development

The programs offered by Consort Institute at Emory are designed so that working professionals can complete a course on an accelerated timeline. All core sessions are held on Saturdays, and each course can be completed within 7 to 12 weeks. This schedule not only lets students learn while employed full time, but also helps them carry their newly learned skills into their work environments quickly and effectively. Although accelerated, the programs offer comprehensive coverage of skills and tools, and students receive continued support long after graduation.

Applied Learning

Centered on leading technologies and tools, the Consort Institute at Emory University programs balance the theoretical and practical knowledge required to learn and apply those tools. The programs are practitioner focused and driven by professional and business needs. Students gain the ability to think critically and address complex issues in a multicultural, rapidly changing technology landscape.

Expert Faculty

The faculty members teaching at Consort Institute at Emory University are industry-leading experts. They not only command a deep understanding of the subject matter but also bring extensive hands-on experience with the tools and technologies they teach. Because they come from diverse industries, they offer case examples that are relevant and directly tied to the material.

Project-based

The programs offered at Consort Institute at Emory are rigorous because of the emphasis placed on capstone projects. These projects not only help students consolidate their knowledge but also give them hands-on project experience. The capstone provides the foundation for our students to take on real projects at their workplaces.

Testimonials
  • Just wanted to send a quick note and let you know that after completing the Big Data program a year ago, I was offered a big time analytics job with a six figure salary for a company in Buckhead.
    Andy C.- Big Data at Emory
  • I appreciate the Big Data program at Emory and the whole experience of participating in events -- and for the extra offerings of help. I felt that no other program has this type of commitment and out of the way offering of help. Just wanted to share my appreciation.
    Suneetha V.- Big Data at Emory
  • [Recently], I went through a challenging time which included job transition. I first heard about Emory's Big Data program through a job fair. A few months later, I landed a role in analytics. Excited, I also realized I lacked some key skills -- the following summer, I enrolled in Emory's Big Data program.
    Ed W.- Big Data at Emory
Applied Skills
Natural Language Processing
Natural language processing (NLP) is a subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation.

The history of natural language processing generally started in the 1950s, although work can be found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence. The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three to five years, machine translation would be a solved problem.[2] However, real progress was much slower, and after the ALPAC report in 1966, which found that ten years of research had failed to fulfill expectations, funding for machine translation was dramatically reduced. Little further research in machine translation was conducted until the late 1980s, when the first statistical machine translation systems were developed.
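As a minimal, illustrative sketch of what "processing and analyzing natural language data" looks like in practice, the Python snippet below tokenizes a sentence and counts word frequencies with the NLTK library (the library choice and sample text are our own assumptions, not a prescribed curriculum):

# A minimal NLP sketch: tokenize raw text and count word frequencies.
# Assumes NLTK is installed (pip install nltk).
from collections import Counter

import nltk

nltk.download("punkt", quiet=True)      # tokenizer models (older NLTK versions)
nltk.download("punkt_tab", quiet=True)  # tokenizer tables (newer NLTK versions)

text = ("Natural language processing lets computers process and analyze "
        "large amounts of natural language data.")

tokens = nltk.word_tokenize(text.lower())    # split raw text into word tokens
words = [t for t in tokens if t.isalpha()]   # keep words, drop punctuation tokens
print(Counter(words).most_common(5))         # the most frequent words

Real pipelines build on the same primitive steps -- tokenization, normalization, counting -- before any statistical modeling is applied.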
Speech Recognition
Speech recognition is the ability of a machine or program to identify words and phrases in spoken language and convert them to a machine-readable format. Rudimentary speech recognition software has a limited vocabulary of words and phrases, and it may identify them only if they are spoken very clearly.
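A minimal sketch of that spoken-audio-to-text conversion in Python, using the third-party SpeechRecognition package and Google's free web speech API (the package choice and the file name sample.wav are illustrative assumptions):

# Transcribe a short WAV file to text.
# Assumes: pip install SpeechRecognition, and an audio file named sample.wav.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.AudioFile("sample.wav") as source:
    audio = recognizer.record(source)  # read the whole audio file into memory

try:
    # Send the audio to Google's free web speech API; print the best transcript.
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("The recognizer could not understand the audio.")

This mirrors the limitation noted above: with noisy or mumbled input, the recognize call raises an error rather than returning a transcript.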
Intrusion/Anomaly Detection
In data mining, anomaly detection (also outlier detection) is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data.[1] Typically the anomalous items will translate to some kind of problem such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are also referred to as outliers, novelties, noise, deviations and exceptions. In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts in activity. This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular unsupervised methods) will fail on such data, unless it has been aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro clusters formed by these patterns.
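A minimal sketch of the cluster-analysis approach mentioned above, using scikit-learn's DBSCAN on synthetic data (the data and parameter values are illustrative assumptions):

# DBSCAN labels points it cannot place in any dense cluster as -1 (outliers),
# while a tight burst of unusual activity surfaces as its own small cluster.
# Assumes NumPy and scikit-learn are installed.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # bulk of normal activity
burst = rng.normal(loc=6.0, scale=0.2, size=(10, 2))    # burst of unusual activity
data = np.vstack([normal, burst])

labels = DBSCAN(eps=0.7, min_samples=5).fit_predict(data)

print("clusters found:", sorted(set(labels) - {-1}))    # the burst forms a micro cluster
print("points flagged as outliers:", int(np.sum(labels == -1)))

Note that the burst is detected as a micro cluster of its own rather than as individual outliers, which is exactly why plain rare-object outlier methods can miss this pattern.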
Stock Market Prediction
Stock market prediction is the act of trying to determine the future value of a company stock or other financial instrument traded on an exchange. The successful prediction of a stock's future price could yield significant profit. The efficient-market hypothesis suggests that stock prices reflect all currently available information, and that any price changes not based on newly revealed information are therefore inherently unpredictable. Others disagree, and those who hold this view employ a myriad of methods and technologies that purportedly allow them to gain future price information.

The efficient-market hypothesis posits that stock prices are a function of information and rational expectations, and that newly revealed information about a company's prospects is almost immediately reflected in the current stock price. This implies that all publicly known information about a company, including its price history, is already reflected in the current price of the stock. Accordingly, changes in the stock price reflect the release of new information, changes in the market generally, or random movements around the value that reflects the existing information set.

Burton Malkiel, in his influential 1973 work A Random Walk Down Wall Street, claimed that stock prices could therefore not be accurately predicted by looking at price history. Stock prices, Malkiel argued, are best described by a statistical process called a "random walk," meaning each day's deviations from the central value are random and unpredictable. This led Malkiel to conclude that paying financial professionals to predict the market actually hurt, rather than helped, net portfolio return. A number of empirical tests support this view: most portfolios managed by professional stock predictors do not outperform the market average return after accounting for the managers' fees.
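A minimal sketch of the "random walk" Malkiel describes, simulated in Python with NumPy (the one-year horizon and 1% daily volatility are illustrative assumptions): each day's return is pure noise, so the simulated paths diverge unpredictably and the price history says nothing about tomorrow's move.

# Simulate random-walk price paths: each day the price moves by a random return.
# Assumes NumPy is installed.
import numpy as np

rng = np.random.default_rng(42)
days, n_paths, start_price = 252, 5, 100.0   # one trading year, five sample paths

# Daily returns: mean zero, about 1% daily volatility.
daily_returns = rng.normal(loc=0.0, scale=0.01, size=(n_paths, days))

# Compound the daily returns into price paths.
prices = start_price * np.cumprod(1.0 + daily_returns, axis=1)

print("final prices after one year:", np.round(prices[:, -1], 2))

The spread of final prices across paths that started identically is the random-walk point in miniature: under this model, no analysis of past prices would have told you which path you were on.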
Location and Contact
Mr. Hamid Arjmand
404.406.9020
harjman@emory.edu
6 Executive Park Dr NE #100, Atlanta, GA 30329