List of Certificates
Introduction to Artificial Intelligence, by Sebastian Thrun and Peter Norvig, Stanford (CS221)
Online Introduction to Artificial Intelligence is based on Stanford CS221, Introduction to Artificial Intelligence. This class introduces students to the basics of Artificial Intelligence, including machine learning, probabilistic reasoning, robotics, and natural language processing.

The objective of this class is to teach you modern AI. You learn about the basic techniques and tricks of the trade, at the same level we teach our Stanford students. We also aspire to excite you about the field of AI. Whether you are a seasoned professional, a college student, or a curious high school student, everyone can participate.

This online class will make the material available to a worldwide audience. But rather than just watching lectures online, you will participate. You will do homework assignments, take exams, participate in discussions with other students, ask questions of the instructors, and also get a final score.
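
As a taste of the probabilistic reasoning the class covers, here is a minimal sketch (not course material) of Bayes' rule applied to a diagnostic test; all numbers are invented for illustration:

    # Bayes' rule: P(disease | positive) = P(pos | disease) * P(disease) / P(pos)
    # All probabilities below are hypothetical.
    p_disease = 0.01            # prior probability of the disease
    p_pos_given_disease = 0.95  # test sensitivity
    p_pos_given_healthy = 0.05  # false-positive rate

    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1 - p_disease))
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

    print(f"P(disease | positive) = {p_disease_given_pos:.3f}")  # about 0.161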


Machine Learning, by Andrew Ng, Stanford (CS229A)
This course provides a broad introduction to machine learning, data mining, and statistical pattern recognition. Topics include: (i) Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks). (ii) Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning). (iii) Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI). The course will also draw from numerous case studies and applications, so that you'll also learn how to apply learning algorithms to building smart robots (perception, control), text understanding (web search, anti-spam), computer vision, medical informatics, audio, database mining, and other areas.
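
To give a flavor of the supervised-learning material, here is a minimal sketch (not course code) of fitting a one-variable linear regression by gradient descent; the data is invented:

    # Fit y = w*x + b by batch gradient descent on mean squared error.
    # Toy data generated from y = 2x + 1; all values are illustrative.
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [1.0, 3.0, 5.0, 7.0, 9.0]

    w, b, lr = 0.0, 0.0, 0.05
    for _ in range(2000):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad_w
        b -= lr * grad_b

    print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0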

Professor Andrew Ng is Director of the Stanford Artificial Intelligence Lab, the main AI research organization at Stanford, with 20 professors and about 150 students/post docs. At Stanford, he teaches Machine Learning, which, with a typical enrollment of 350 Stanford students, is among the most popular classes on campus. His research is primarily on machine learning, artificial intelligence, and robotics, and most universities doing robotics research now do so using a software platform (ROS) from his group.


Building a Search Engine, by David Evans, University of Virginia, Udacity CS101
In this course you will learn key concepts in computer science and learn how to write your own computer programs in the context of building a web crawler.

At the end of this course you will have a rock-solid foundation for programming in Python and will have built a working web crawler. This course will prepare you to take many of Udacity's more advanced courses.
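
A minimal sketch of the crawler idea, assuming only the standard library and a hypothetical seed URL (the course builds a more complete version step by step):

    # Tiny breadth-first web crawler using only the standard library.
    # Real crawlers need politeness (robots.txt), error handling, and rate limits.
    import re
    from urllib.request import urlopen

    def crawl(seed, max_pages=10):
        to_visit, seen = [seed], set()
        while to_visit and len(seen) < max_pages:
            url = to_visit.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url).read().decode("utf-8", errors="replace")
            except OSError:
                continue
            # Naive link extraction; an HTML parser is more robust.
            to_visit.extend(re.findall(r'href="(http[^"]+)"', html))
        return seen

    # Example (hypothetical seed URL):
    # print(crawl("http://example.com"))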

David Evans is a Professor of Computer Science at the University of Virginia where he teaches computer science and leads research in computer security. He is the author of an introductory computer science textbook and has won Virginia's highest award for university faculty. He has PhD, SM, and SB degrees from MIT.

Udacity courses include lecture videos, quizzes and homework assignments. Multiple video "nuggets" make up each course unit. Each nugget is roughly five minutes or less, giving you the chance to learn piece by piece and re-watch short lesson portions with ease. Quizzes are embedded within the lecture videos and are meant to let you check in on how well you are digesting the course information.


Natural Language Processing, by Dan Jurafsky and Chris Manning, Stanford (CS224N)
This course covers a broad range of topics in natural language processing, including word and sentence tokenization, text classification and sentiment analysis, spelling correction, information extraction, parsing, meaning extraction, and question answering. We will also introduce the underlying theory from probability, statistics, and machine learning that is crucial for the field, and cover fundamental algorithms like n-gram language modeling, Naive Bayes and MaxEnt classifiers, sequence models like Hidden Markov Models, probabilistic dependency and constituent parsing, and vector-space models of meaning.
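
As a flavor of n-gram language modeling, a minimal bigram sketch on a toy corpus (an illustration, not course code):

    # Bigram language model: estimate P(next | previous) from counts,
    # P(next | previous) = count(previous, next) / count(previous).
    from collections import Counter

    corpus = "the cat sat on the mat the cat ran".split()
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus[:-1])

    def p(nxt, prev):
        return bigrams[(prev, nxt)] / unigrams[prev]

    print(p("cat", "the"))  # 2/3: "the" is followed by "cat" twice, "mat" once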

Dan Jurafsky is Professor of Linguistics and Professor by Courtesy of Computer Science at Stanford University. Dan's research extends broadly throughout natural language processing as well as its application to the behavioral and social sciences.

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. He is a Fellow of the American Association for Artificial Intelligence and of the Association for Computational Linguistics, and is one of the most cited authors in natural language processing, for his research on a broad range of statistical natural language topics from tagging and parsing to grammar induction and text understanding.


Web Application Engineering, by Steve Huffman, reddit.com, Hipmunk, Udacity CS253
How to Build a Blog.
Starting from the basics of how the web works, this class will walk you through everything you need to know to build your own blog application and scale it to support large numbers of users.

In this project-based course your knowledge will be evaluated as you learn to build your own blog application! Learn everything Steve Huffman wishes he had known when he broke into the startup world.
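
As a hint of what's involved, here is a bare-bones sketch of a blog server using only Python's standard library (the course's own stack and scope go well beyond this):

    # Minimal in-memory "blog": GET lists posts, POST adds one.
    # No persistence, templating, or security; purely illustrative.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    posts = []

    class BlogHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = "<h1>Blog</h1>" + "".join(f"<p>{p}</p>" for p in posts)
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body.encode())

        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            posts.append(self.rfile.read(length).decode())
            self.send_response(303)           # redirect back to the post list
            self.send_header("Location", "/")
            self.end_headers()

    # HTTPServer(("", 8080), BlogHandler).serve_forever()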

Steve Huffman co-founded the social news site reddit.com in 2005 with his college roommate. reddit.com has since grown into one of the largest communities online. In 2010, he co-founded Hipmunk, a company that takes the agony out of searching for plane tickets and hotels. Steve was named to Inc. Magazine's 30 under 30 list in 2011. He studied Computer Science at the University of Virginia.


Design of Computer Programs, by Peter Norvig, Google, Udacity CS212
Programming Principles
Learn new concepts, patterns, and methods that will expand your programming abilities, helping move you from a novice to an expert programmer.

Move along the path towards becoming an expert programmer! In this class you will practice going from a problem description to a solution, using a series of assignments.

Peter Norvig is Director of Research at Google Inc. He is also a Fellow of the American Association for Artificial Intelligence and the Association for Computing Machinery. Norvig is co-author of the popular textbook Artificial Intelligence: A Modern Approach. Prior to joining Google he was the head of the Computation Sciences Division at NASA Ames Research Center.


Computing for Data Analysis, by Roger D. Peng, Johns Hopkins University
In this course you will learn how to program in R and how to use R for effective data analysis. You will learn how to install and configure the software necessary for a statistical programming environment, and discuss generic programming language concepts as they are implemented in a high-level statistical language. The course covers practical issues in statistical computing, including programming in R, reading data into R, creating informative data graphics, accessing R packages, creating R packages with documentation, writing R functions, debugging, and organizing and commenting R code. Topics in statistical data analysis and optimization will provide working examples.

Roger D. Peng is an associate professor of Biostatistics at the Johns Hopkins Bloomberg School of Public Health and a Co-Editor of the Simply Statistics blog. He created the Statistical Programming course at Johns Hopkins, where it has been taught for the past eight years. Dr. Peng is also a national leader in the area of methods and standards for reproducible research and is the Reproducible Research editor for the journal Biostatistics. Dr. Peng is the author of more than a dozen software packages implementing statistical methods for environmental studies, methods for reproducible research, and data distribution tools.


Introduction to Mathematical Thinking, by Keith Devlin, Stanford
The goal of the course is to help you develop a valuable mental ability – a powerful way of thinking that our ancestors have developed over three thousand years.

Mathematical thinking is not the same as doing mathematics – at least not as mathematics is typically presented in our school system. School math typically focuses on learning procedures to solve highly stereotyped problems. Professional mathematicians think a certain way to solve real problems, problems that can arise from the everyday world, or from science, or from within mathematics itself. The key to success in school math is to learn to think inside-the-box. In contrast, a key feature of mathematical thinking is thinking outside-the-box – a valuable ability in today’s world. This course helps to develop that crucial way of thinking.

Dr. Keith Devlin is a co-founder and Executive Director of Stanford University's H-STAR institute and a co-founder of the Stanford Media X research network. He is a World Economic Forum Fellow and a Fellow of the American Association for the Advancement of Science. He has written 32 books and over 80 published research articles. He is a recipient of the Pythagoras Prize, the Peano Prize, the Carl Sagan Award, and the Joint Policy Board for Mathematics Communications Award. In 2003, he was recognized by the California State Assembly for his "innovative work and longtime service in the field of mathematics and its relation to logic and linguistics." He is "the Math Guy" on National Public Radio.


Web Intelligence and Big Data, by Gautam Shroff, IIT Delhi
The past decade has witnessed the successful application of many AI techniques at "web scale", on what are popularly referred to as big data platforms based on the map-reduce parallel computing paradigm and associated technologies such as distributed file systems, NoSQL databases, and stream computing engines. Online advertising, machine translation, natural language understanding, sentiment mining, personalized medicine, and national security are some examples of such AI-based web-intelligence applications that are already in the public eye. Others, though less apparent, impact the operations of large enterprises from sales and marketing to manufacturing and supply chains. In this course we explore some such applications, the AI/statistical techniques that make them possible, along with parallel implementations using map-reduce and related platforms.
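
The map-reduce idea at the heart of these platforms is compact; here is a minimal word-count sketch simulated on a single machine (the real platforms distribute these phases across many machines):

    # Word count in map-reduce style, simulated on one machine.
    from itertools import groupby

    docs = ["big data big ideas", "data platforms"]

    # Map phase: emit (word, 1) pairs.
    mapped = [(word, 1) for doc in docs for word in doc.split()]

    # Shuffle phase: group pairs by key.
    mapped.sort(key=lambda kv: kv[0])

    # Reduce phase: sum the counts for each word.
    counts = {word: sum(c for _, c in group)
              for word, group in groupby(mapped, key=lambda kv: kv[0])}

    print(counts)  # {'big': 2, 'data': 2, 'ideas': 1, 'platforms': 1}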

Dr. Gautam Shroff is Vice President and Chief Scientist at Tata Consultancy Services, where he heads TCS' Innovation Lab in Delhi, India. He is teaching this course in an adjunct capacity at IIT Delhi and IIIT Delhi.

Prior to joining TCS in 1998, Dr. Shroff had been on the faculty of the California Institute of Technology, Pasadena, USA, and thereafter of the Department of Computer Science and Engineering at the Indian Institute of Technology, Delhi, India. He has also held visiting positions at NASA Ames Research Center in Mountain View, CA, and at Argonne National Laboratory near Chicago.


Artificial Intelligence, by Dan Klein and Pieter Abbeel, Berkeley
CS188.1x is a new online adaptation of the first half of UC Berkeley's CS188: Introduction to Artificial Intelligence.

CS188.1x focuses on Behavior from Computation. It will introduce the basic ideas and techniques underlying the design of intelligent computer systems. A specific emphasis will be on the statistical and decision-theoretic modeling paradigm. By the end of this course, you will have built autonomous agents that efficiently make decisions in stochastic and in adversarial settings.
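
As a flavor of the adversarial-decision material, a minimal minimax sketch over a toy game tree (illustrative only, not course code):

    # Minimax on a toy game tree: internal nodes are lists of children,
    # leaves are numeric payoffs for the maximizing player.
    def minimax(node, maximizing=True):
        if isinstance(node, (int, float)):  # leaf: return its payoff
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    tree = [[3, 12], [2, 4], [14, 1]]  # hypothetical two-ply game
    print(minimax(tree))  # 3: the max over the minimizer's choices (3, 2, 1)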

Dan Klein (PhD Stanford, MSt Oxford, BA Cornell) is an associate professor of computer science at the University of California, Berkeley.

Pieter Abbeel (PhD Stanford, MS/BS KU Leuven) joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in 2008.


Neural Networks for Machine Learning, by Geoffrey Hinton, University of Toronto
The course covered learning techniques for many different types of neural networks, including deep feed-forward networks, recurrent networks, and Boltzmann Machines. It covered recent applications to speech, vision, and language, and included hands-on programming assignments.

Geoffrey Hinton was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts and deep belief nets. He received his PhD in Artificial Intelligence from Edinburgh in 1978 and spent five years as a faculty member in Computer Science at Carnegie Mellon. He then moved to the Department of Computer Science at the University of Toronto, where he directs the program on "Neural Computation and Adaptive Perception" for the Canadian Institute for Advanced Research.
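
For a sense of what back-propagation does, here is a minimal numpy sketch that trains a tiny feed-forward network on XOR (a toy illustration, far simpler than the course material):

    # Train a tiny feed-forward sigmoid network on XOR with back-propagation.
    # A column of ones is appended to each layer's input to serve as a bias.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    W1 = rng.normal(size=(3, 8))   # input (+bias) to 8 hidden units
    W2 = rng.normal(size=(9, 1))   # hidden (+bias) to output
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for _ in range(10000):
        h = np.hstack([sigmoid(X @ W1), np.ones((4, 1))])     # forward: hidden
        out = sigmoid(h @ W2)                                 # forward: output
        d_out = (out - y) * out * (1 - out)                   # output error signal
        d_h = (d_out @ W2[:8].T) * h[:, :8] * (1 - h[:, :8])  # error propagated back
        W2 -= 0.5 * h.T @ d_out                               # gradient-descent steps
        W1 -= 0.5 * X.T @ d_h

    print(out.round(2).ravel())  # should approach [0, 1, 1, 0]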


Learning From Data, by Yaser Abu-Mostafa, Caltech
This is an introductory course in machine learning (ML) that covers the basic theory, algorithms, and applications. ML is a key technology in Big Data, and in many financial, medical, commercial, and scientific applications. It enables computational systems to adaptively improve their performance with experience accumulated from the observed data. ML has become one of the hottest fields of study today, taken up by graduate and undergraduate students from 15 different majors at Caltech. This course balances theory and practice, and covers the mathematical as well as the heuristic aspects. The lectures follow each other in a story-like fashion, with the main topics listed below.
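
As one classic example of a system improving from observed data, here is a minimal perceptron-learning sketch on an invented, linearly separable dataset (an illustration, not course code):

    # Perceptron learning algorithm on a linearly separable toy set.
    # Points are (x1, x2) with label +1 or -1; weights include a bias w0.
    data = [((1.0, 1.0), 1), ((2.0, 0.5), 1),
            ((-1.0, -1.5), -1), ((-2.0, 0.0), -1)]
    w = [0.0, 0.0, 0.0]  # [bias, w1, w2]

    def predict(w, x):
        return 1 if w[0] + w[1] * x[0] + w[2] * x[1] > 0 else -1

    changed = True
    while changed:  # repeat until every point is classified correctly
        changed = False
        for x, label in data:
            if predict(w, x) != label:  # update on any misclassified point
                w = [w[0] + label, w[1] + label * x[0], w[2] + label * x[1]]
                changed = True

    print(w)  # a separating hyperplane, e.g. [1.0, 1.0, 1.0]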

Yaser S. Abu-Mostafa is a Professor of Electrical Engineering and Computer Science at the California Institute of Technology. His main fields of expertise are machine learning and computational finance.

A real Caltech course, not a watered-down version.


Data Analysis, by Jeff Leek, Johns Hopkins University
This course is an applied statistics course focusing on data analysis. The course will begin with an overview of how to organize, perform, and write up data analyses. Then we will cover some of the most popular and widely used statistical methods like linear regression, principal components analysis, cross-validation, and p-values. Instead of focusing on mathematical details, the lectures will be designed to help you apply these techniques to real data using the R statistical programming language, interpret the results, and diagnose potential problems in your analysis. You will also have the opportunity to critique and assist your fellow classmates with their data analyses.
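
The course itself works in R; purely as a language-neutral sketch of one listed idea, cross-validation, here is a leave-one-out estimate of prediction error for a one-variable linear fit on invented data:

    # Leave-one-out cross-validation for a one-variable linear fit.
    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # invented data, roughly y = 2x

    def fit(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    errors = []
    for i in range(len(xs)):  # hold out one point at a time
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        slope, intercept = fit(train_x, train_y)
        errors.append((slope * xs[i] + intercept - ys[i]) ** 2)

    print(sum(errors) / len(errors))  # estimated out-of-sample squared error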

Jeff Leek is an Assistant Professor of Biostatistics at the Johns Hopkins Bloomberg School of Public Health and co-editor of the Simply Statistics Blog. He created Data Analysis as a component of the year-long statistical methods core sequence for Biostatistics students at Johns Hopkins. The course has won a teaching excellence award, voted on by the students at Johns Hopkins, every year Dr. Leek has taught the course.


Statistics: Making Sense of Data, by Alison Gibbs and Jeffrey Rosenthal, University of Toronto
This course is an introduction to the key ideas and principles of the collection, display, and analysis of data, to guide you in drawing valid and appropriate conclusions about the world.

This course will provide an intuitive introduction to applied statistical reasoning, introducing fundamental statistical skills and acquainting students with the full process of inquiry and evaluation used in investigations in a wide range of fields. In particular, the course will cover methods of data collection, constructing effective graphical and numerical displays to understand the data, how to estimate and describe the error in estimates of some important quantities, and the key ideas in how statistical tests can be used to separate significant differences from those that are only a reflection of the natural variability in data.
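
One of those key ideas, the natural variability of estimates under repeated sampling, shows up clearly in a short simulation (a sketch, not course material):

    # Simulate flipping a fair coin 100 times, repeated 1000 times, to see
    # how much the sample proportion varies around the true value of 0.5.
    import random

    random.seed(1)
    proportions = []
    for _ in range(1000):
        flips = [random.random() < 0.5 for _ in range(100)]
        proportions.append(sum(flips) / 100)

    proportions.sort()
    # Middle 95% of the simulated proportions: the natural variability.
    print(proportions[25], proportions[975])  # roughly 0.40 and 0.60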


Synapses, Neurons and Brains, by Idan Segev, Hebrew University of Jerusalem
Probably the greatest challenge of the “21st century of the brain” is to understand how subcellular and cellular neuronal processes give rise to behavior – movement, perception, emotions, memory and creativity. This course will discuss, step-by-step, how modern molecular, optical, electrical, anatomical and theoretical methods have provided fascinating insights into the operation of the elementary building blocks of brains and, most importantly, how neuronal mechanisms underlie memory and learning processes. We will next discuss why computer simulations are so essential for understanding both neuronal “life ware” and the emergence of network dynamics (e.g., as in the “Blue Brain Project”).

The course will start by highlighting a few recent excitements in brain research, including treating the sick brain via electrical stimulation, recent attempts at “reading the brain code” for brain-machine interfaces, new neuro-anatomical techniques (“Brainbow” and connectomics) and physiological methods (optogenetics) that enable us to record/activate the living, behaving brain at single cell resolution. We will end by discussing emerging frontiers in brain research, including the interaction between brain research and the arts. As an added bonus, a lecture on perception, action, cognition and emotions will be taught by an acclaimed neuroscientist, Prof. Israel Nelken.
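
To hint at what such neuronal simulations look like, here is a minimal leaky integrate-and-fire sketch (a drastically simplified model, for illustration only):

    # Leaky integrate-and-fire neuron: membrane voltage decays toward rest,
    # integrates input, and fires a spike on reaching threshold.
    v_rest, v_th, v_reset = -70.0, -55.0, -70.0  # membrane potentials (mV)
    tau, dt, drive = 20.0, 1.0, 18.0             # time constant (ms), step (ms), input (mV)

    v, spike_times = v_rest, []
    for t in range(200):  # simulate 200 ms
        v += dt * (-(v - v_rest) + drive) / tau  # leak toward rest plus input
        if v >= v_th:
            spike_times.append(t)  # threshold crossed: record a spike
            v = v_reset            # and reset the membrane potential

    print(spike_times)  # regular spike times under constant drive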


Introduction to Data Science, by Bill Howe, University of Washington
Commerce and research are being transformed by data-driven discovery and prediction. Skills required for data analytics at massive levels – scalable data management on and off the cloud, parallel algorithms, statistical modeling, and proficiency with a complex ecosystem of tools and platforms – span a variety of disciplines and are not easy to obtain through conventional curricula. Tour the basic techniques of data science, including both SQL and NoSQL solutions for massive data management (e.g., MapReduce and contemporaries), algorithms for data mining (e.g., clustering and association rule mining), and basic statistical modeling (e.g., linear and non-linear regression).
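
As a taste of the data-mining side, a minimal k-means clustering sketch on one-dimensional toy data (illustrative only; the course surveys many such algorithms):

    # k-means with k=2 on one-dimensional toy data.
    def kmeans(points, k=2, iters=10):
        centers = points[:k]  # naive initialization
        for _ in range(iters):
            # Assignment step: attach each point to its nearest center.
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k), key=lambda i: abs(p - centers[i]))
                clusters[nearest].append(p)
            # Update step: move each center to its cluster's mean.
            centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        return centers

    print(kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 8.7]))  # centers near 1.0 and 9.1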

Bill Howe is the Director of Research for Scalable Data Analytics at the UW eScience Institute and holds an Affiliate Assistant Professor appointment in Computer Science & Engineering, where he leads a group studying data management, analytics, and visualization systems for science applications. Howe has received awards from Microsoft Research and honors for papers in scientific data management, and serves on a number of program committees, organizing committees, and advisory boards in the area, including the advisory board of the Data Science certificate program at UW. He holds a Ph.D. in Computer Science from Portland State University and a Bachelor's degree in Industrial & Systems Engineering from Georgia Tech.


Algorithms: Design and Analysis, by Tim Roughgarden, Stanford
In this course you will learn several fundamental principles of algorithm design. You'll learn the divide-and-conquer design paradigm, with applications to fast sorting, searching, and multiplication. You'll learn several blazingly fast primitives for computing on graphs, such as how to compute connectivity information and shortest paths. Finally, we'll study how allowing the computer to "flip coins" can lead to elegant and practical algorithms and data structures. Learn the answers to questions such as: How do data structures like heaps, hash tables, Bloom filters, and balanced search trees actually work, anyway? How come QuickSort runs so fast? What can graph algorithms tell us about the structure of the Web and social networks? Did my 3rd-grade teacher explain only a suboptimal algorithm for multiplying two numbers?
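
That last question alludes to divide-and-conquer multiplication; here is a minimal sketch of Karatsuba's algorithm, one well-known answer (illustrative, not course code):

    # Karatsuba multiplication: three recursive multiplications instead of four.
    def karatsuba(x, y):
        if x < 10 or y < 10:  # base case: a single-digit factor
            return x * y
        m = max(len(str(x)), len(str(y))) // 2
        high_x, low_x = divmod(x, 10 ** m)
        high_y, low_y = divmod(y, 10 ** m)
        a = karatsuba(high_x, high_y)
        b = karatsuba(low_x, low_y)
        # One multiplication recovers the cross terms: (hx+lx)(hy+ly) - a - b.
        c = karatsuba(high_x + low_x, high_y + low_y) - a - b
        return a * 10 ** (2 * m) + c * 10 ** m + b

    print(karatsuba(1234, 5678), 1234 * 5678)  # both 7006652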

Tim Roughgarden is an Associate Professor of Computer Science and (by courtesy) Management Science and Engineering at Stanford University, where he holds the Chambers Faculty Scholar development chair. At Stanford, he has taught the Design and Analysis of Algorithms course for the past eight years. His research concerns the theory and applications of algorithms, especially for networks, auctions and other game-theoretic applications, and data privacy. For his research, he has been awarded the ACM Grace Murray Hopper Award, the Presidential Early Career Award for Scientists and Engineers (PECASE), the Shapley Lectureship of the Game Theory Society, a Sloan Fellowship, the INFORMS Optimization Prize for Young Researchers, and the Mathematical Programming Society's Tucker Prize.


Coding the Matrix: Linear Algebra through Computer Science Applications, by Philip Klein, Brown University
In this class, you will learn the concepts and methods of linear algebra, and how to use them to think about problems arising in computer science. You will write small programs in the programming language Python to implement basic matrix and vector functionality and algorithms, and use these to process real-world data to achieve such tasks as: two-dimensional graphics transformations, face morphing, face detection, image transformations such as blurring and edge detection, image perspective removal, audio and image compression, searching within an image or an audio clip, classification of tumors as malignant or benign, integer factorization, error-correcting codes, secret-sharing, network layout, document classification, and computing PageRank (Google's ranking method).
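
In the spirit of the course's from-scratch approach, a minimal matrix-vector sketch (not the course's own classes) applying a 2-D rotation, one of the graphics transformations mentioned above:

    # Matrix-vector product from scratch: a 2-D rotation by 90 degrees.
    def mat_vec(M, v):
        return [sum(row[i] * v[i] for i in range(len(v))) for row in M]

    rotate90 = [[0.0, -1.0],
                [1.0,  0.0]]
    print(mat_vec(rotate90, [1.0, 0.0]))  # [0.0, 1.0]: the x-axis rotated to y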

Philip Klein is Professor of Computer Science at Brown University. He was a recipient of the National Science Foundation's Presidential Young Investigator Award, and has received multiple research grants from the National Science Foundation. He has been made an ACM Fellow in recognition of his contributions to research on graph algorithms. He is a recipient of Brown University's Award for Excellence in Teaching in the Sciences.


C++ For C Programmers, by Ira Pohl, University of California, Santa Cruz
The course provides an overview of C++ for the experienced C programmer. You will learn how C++ is more powerful than C. The C++ STL library will be featured. This library allows C++ programmers to code generically, efficiently and at a high level. You will learn how to write basic graph algorithms such as the shortest path algorithm. You'll then put this skill to use in a programming assignment aimed at producing an intelligent hex player.
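
The course implements such algorithms in C++; purely for illustration, here is the shortest-path idea (Dijkstra's algorithm, one standard choice) sketched in Python:

    # Dijkstra's shortest-path algorithm on a weighted directed graph.
    import heapq

    def dijkstra(graph, source):
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale queue entry
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
    print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}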

Ira Pohl is a Professor of Computer Science at the University of California, Santa Cruz, and a Fellow of the ACM. His department is part of the Jack Baskin School of Engineering. Professor Pohl has written widely on programming in C, C++, C#, and Java.

His research interests include artificial intelligence, programming languages such as C#, C, C++ and Java, practical complexity problems, heuristic search methods, deductive algorithms, and educational and social issues. He enjoys walks and bike rides by the ocean, and can occasionally be found playing chess in a coffee house.


Mining Massive Datasets, by Jeff Ullman et al., Stanford
We introduce the student to modern distributed file systems and MapReduce, including what distinguishes good MapReduce algorithms from good algorithms in general. The rest of the course is devoted to algorithms for extracting models and information from large datasets. Students will learn how Google's PageRank algorithm models the importance of Web pages and some of the many extensions that have been used for a variety of purposes. We'll cover locality-sensitive hashing, a bit of magic that allows you to find similar items in a set of items so large you cannot possibly compare each pair. When data is stored as a very large, sparse matrix, dimensionality reduction is often a good way to model the data, but standard approaches do not scale well; we'll talk about efficient approaches. Many other large-scale algorithms are covered as well, as outlined in the course syllabus.
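
As a flavor of the PageRank material, a minimal power-iteration sketch on an invented three-page web (illustrative, not course code):

    # PageRank by power iteration on a tiny 3-page link graph.
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # hypothetical web
    pages = list(links)
    damping = 0.85

    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(50):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:  # each page shares its rank over its links
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank

    print({p: round(r, 3) for p, r in rank.items()})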