Introduction: A Constraint Satisfaction Problem (CSP) involves finding a configuration that satisfies all the constraints in the problem. Solving a CSP is performed as a form of search, using methods like backtracking, constraint propagation, or local search.

Papers read:

1. Neural Network based Constraint Satisfaction in Ontology Mapping (2008)
This paper discusses using an Interactive Activation and Competition (IAC) neural network for constraint satisfaction. Each node represents a hypothesis, and a connection between nodes indicates whether the hypotheses support or contradict each other. Each connection is associated with a weight that supports or contradicts the relation, and this weight is proportional to the strength of the constraint. The activation of a node is determined by four factors: initial activation, input from adjacent nodes, bias, and external input. The paper proposes an IAC-based approach to find the global optimal solution that best satisfies the ontology constraints. Suggested future work includes exploring more complex constraints, optimizing the weight matrix, and implementing the neural network on parallel computing platforms to improve its efficiency.

2. Reinforced Adaptive Large Neighborhood Search (LNS) (2011)
The paper proposes a reinforcement learning framework for adaptive selection of the parameters of LNS. LNS is a local search technique that uses Constraint Programming (CP) to explore its neighbourhoods and is applicable to a wide range of problems such as cumulative scheduling, vehicle routing, job-shop scheduling, and car sequencing. LNS is effective when its parameters are chosen properly for the particular problem: random assignment is generic and performs poorly, while setting the right values by hand is a difficult task. The paper discusses adaptive selection of LNS parameters and different applications of RL to LNS.
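The "adaptive selection of LNS parameters" idea could be sketched as a simple epsilon-greedy bandit over candidate parameter values (e.g. fragment sizes). This is a minimal illustrative sketch, not the paper's actual framework; the class name, reward signal, and arm encoding are assumptions.

```python
import random

class EpsilonGreedySelector:
    """Hypothetical sketch: pick an LNS parameter value (an "arm") by
    epsilon-greedy exploration over the reward observed so far."""

    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms = list(arms)                       # candidate values, e.g. fragment sizes
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}      # times each arm was tried
        self.values = {a: 0.0 for a in self.arms}    # running mean reward per arm
        self.rng = random.Random(seed)

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best mean reward.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean update of the reward (e.g. objective improvement
        # found while exploring the relaxed neighbourhood).
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

Each LNS iteration would call `select()` to choose, say, a fragment size, run the neighbourhood exploration, and feed the observed improvement back through `update()`.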
LNS consists of repeatedly performing neighbourhood definition (choosing the fragments and relaxing them to their original domains) and exploration (exploring the relaxed neighbourhood). The parameters of LNS are the size of the fragment, the fragment selection procedure, and the limit on the exploration step.

3. Multi-agent oriented constraint satisfaction (2002)
The paper discusses a multi-agent method for solving CSPs, where distributed agents represent variables and a two-dimensional grid-like environment corresponds to the domains of the variables. Many problems in AI, such as spatial and temporal planning, diagnostics, decision support, qualitative and symbolic reasoning, and robot planning, can be translated into CSPs. The paper discusses Generate and Test, backtracking, and the min-conflicts heuristic before moving to the multi-agent approach. The multi-agent approach utilizes the idea of inconsistency reduction over a complete initial assignment. A distributed CSP is a CSP in which variables and constraints are semantically partitioned into sub-problems, each of which is solved by an agent. Swarm is a formulation for simulating distributed multi-agent systems that involves three key concepts: a living environment, agents with reactive rules, and a schedule serving as a timetable to update changes and dispatch agents' actions. Finally, the paper proposes a new approach called ERA (Environment, Reactive rules and Agents) to solve distributed as well as general CSPs. The idea lies in a distributed multi-agent system with the same architecture as Swarm, which self-organizes and, as each agent follows its behavioural rules, gradually evolves towards a global solution state. The main difference between ERA and local search is that ERA's evaluation criterion is not the total number of unsatisfied constraints, as in local search, but the number of unsatisfied constraints for each value of each variable.
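The per-value evaluation criterion just described can be sketched as follows: for one variable, count how many constraints each candidate value would violate given the other agents' current positions. The function name and the binary-constraint encoding are assumptions for illustration, not the paper's exact formulation.

```python
def violations_per_value(var, domain, assignment, constraints):
    """Hypothetical sketch of ERA's evaluation criterion.

    constraints: dict mapping (var_a, var_b) -> predicate(val_a, val_b)
    that returns True when the pair of values is consistent.
    Returns, for each value in the variable's domain, the number of
    constraints that value would violate under the current assignment.
    """
    counts = {}
    for value in domain:
        n = 0
        for (a, b), ok in constraints.items():
            if a == var and b in assignment and not ok(value, assignment[b]):
                n += 1
            elif b == var and a in assignment and not ok(assignment[a], value):
                n += 1
        counts[value] = n
    return counts
```

An agent following a least-move style rule would then move to a value with the minimum count, rather than evaluating the global number of unsatisfied constraints.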
Each agent can only sense its local environment and applies behavioural rules to govern its value-selection moves. Even though the ERA approach has been applied to the n-queens problem and colouring problems, it has yet to be tested over a wider set of CSPs.

4. Neural networks for Finite constraint satisfaction (1995)
The paper presents a new modelling technique in which the knowledge representation is based on two fundamental types of constraint, choice constraints and exclusion constraints, and the model is then implemented by means of several neural networks.

5. Random Subsets Support Learning a Mixture of Heuristics (2007)
A self-supervised learner learns combinations of heuristics that solve CSPs more effectively than individual heuristics do. Given a class of problems, the Adaptive Constraint Engine (ACE) with Relative Support Weight learning selects Advisors and learns weights so that the decisions supported by the largest weighted combination of strengths lead to effective search. ACE performs a form of self-supervised reinforcement learning, and the information available to it comes from the limited experience it acquires when it finds a solution to a problem. Future scope includes the learning step limit, termination criteria for learning, full restart parameters, and the constrainedness of the problem class. [Not so useful]

6. Constraint Programming for Data Mining and Machine Learning (2010)
This paper shows that off-the-shelf constraint programming techniques can be applied to various pattern mining and rule learning problems. Interest in constraints in data mining has led to the development of solvers for problems known as closed itemset mining, maximal frequent itemset mining, discriminative itemset mining, and so on. The contribution of the paper is to show how constraint programming can be applied to pattern mining and rule learning problems.
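To make the pattern-mining connection concrete, frequent itemset mining can be phrased as "find all itemsets whose coverage meets a minimum-support constraint" — the kind of constraint a CP solver would post over Boolean item/transaction variables. The sketch below checks that constraint by brute-force enumeration purely for illustration; it is not the paper's encoding.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Hypothetical sketch: enumerate all itemsets whose coverage
    (number of transactions containing the itemset) satisfies the
    minimum-support constraint."""
    items = sorted({i for t in transactions for i in t})
    result = []
    for k in range(1, len(items) + 1):
        for candidate in combinations(items, k):
            cover = sum(1 for t in transactions if set(candidate) <= t)
            if cover >= min_support:   # the frequency constraint
                result.append(frozenset(candidate))
    return result
```

Variants such as closed or maximal itemset mining add further constraints on the same search space, which is what makes a generic CP solver applicable.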
There is very little discussion of how CP can be extended using data mining and ML.

7. Preferences in constraint satisfaction and optimization (2008)

8. Predicting Optimal Constraint Satisfaction Methods (2010)
CSPs represent a wide variety of problems in planning, scheduling, etc. It is difficult to determine beforehand which algorithm will work best for a given problem, and many problems require specialized solvers. This work explores the use of machine learning to predict which solving method will be most effective for a given problem.
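For reference, the backtracking search mentioned in the introduction — the baseline that most of the papers above improve on or learn over — can be sketched as follows (a minimal chronological version; variable ordering heuristics and constraint propagation are omitted):

```python
def backtrack(assignment, variables, domains, consistent):
    """Minimal chronological backtracking for a CSP.

    consistent(assignment) must check all constraints among the
    currently assigned variables. Returns a complete assignment,
    or None if no solution exists.
    """
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[var]   # undo and try the next value
    return None
```

A small graph-colouring instance (variables A, B, C with A != B and B != C over two colours) is enough to exercise it.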