- Blockchains
- Bots and Agents
- Cognitive computing
- Knowledge representation
- NLP
- Probabilistic programming
- Reinforcement learning
- Vision

Reinforcement learning (RL) is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Because of its generality, the problem is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics, and genetic algorithms. In the operations research and control literature, reinforcement learning is called approximate dynamic programming or neuro-dynamic programming. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment. In economics and game theory, reinforcement learning may be used to explain how equilibrium may arise under bounded rationality.

In machine learning, the environment is typically formulated as a Markov Decision Process (MDP), as many reinforcement learning algorithms for this context utilize dynamic programming techniques. The main difference between classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the MDP, and they target large MDPs where exact methods become infeasible.
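That last distinction — learning from sampled transitions rather than from a known model — can be illustrated with tabular Q-learning. The sketch below uses an invented two-state toy MDP (the dynamics in `step` are made up purely for illustration); the agent never inspects the model, it only acts and observes rewards.

```python
import random

random.seed(0)

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Hidden environment dynamics: the agent cannot inspect this.
    (A toy MDP invented for this example.)"""
    if state == 0 and action == 1:
        return 1, 1.0   # taking action 1 in state 0 pays off
    return 0, 0.0       # everything else resets to state 0

# Q-table initialized to zero: no prior knowledge of the MDP.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: bootstrap from the best next-state value
    Q[state][action] += ALPHA * (
        reward + GAMMA * max(Q[next_state]) - Q[state][action]
    )
    state = next_state

# After training, the agent prefers action 1 in state 0,
# even though it was never shown the transition model.
print(Q[0][1] > Q[0][0])
```

Classical dynamic programming (value or policy iteration) would instead sweep over all states using the known transition probabilities; Q-learning replaces those sweeps with updates along sampled trajectories, which is what lets it scale to MDPs too large to enumerate.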

OpenAI released Spinning Up yesterday. It is an educational resource for anyone who wants to become a skilled practitioner of deep reinforcement learning. Spinning Up includes reinforcement learning examples, documentation, and tutorials. The inspiration for Spinning Up came from the OpenAI Scholars and Fellows initiatives, where they observed that it's possible for people with little-to-no experience [...]