Papers Read:

1. (Applied Intelligence) A Connectionist Approach for Solving Large Constraint Satisfaction Problems (1997)
Search techniques based on backtracking algorithms work well for small and medium problem sizes, but become difficult for larger problems due to the large search spaces. Even parallel implementation is difficult because of the centralised control exercised by the problem solver. Distributed approaches are based on Neural Networks (NN) or on stochastic learning automata. NN models are based on the Hopfield model, which suffers from local energy minima that do not represent feasible solutions. An annealing process is often introduced into the operation of the network, and the resulting Boltzmann Machine model succeeds in escaping these equilibrium points; however, it is also inefficient for large problem sizes. The Guarded Discrete Stochastic network approach, an extension of the pure discrete Hopfield network, manages to push the network out of these equilibrium points; the drawback is that convergence is not guaranteed and oscillation may appear, since the connections between the guard units and the network units are not symmetric. The extended stochastic automata method considers an underlying Hopfield-like network, is inherently parallel, converges asymptotically, and is applicable even to medium-sized problems. The method in the paper is based on a group double-update technique applied to the operation of the discrete Hopfield-type neural network (or the corresponding Boltzmann machine) constructed for the solution of such problems.

2. Machine Learning for constraint solver design (2013)
Given a particular problem or problem instance, we want to decide which decisions to make. ML approaches have been taken to configure, to select among, and to tune the parameters of solvers in the related fields of mathematical programming, SAT and constraints.
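The Hopfield-vs-Boltzmann distinction above can be made concrete with a small sketch: a deterministic Hopfield update only ever lowers the network energy and so can get stuck in a local minimum, while the Boltzmann-style stochastic update occasionally accepts uphill moves at temperature T. This is a minimal toy, assuming binary unit states in {0, 1} and hand-picked symmetric weights; it is not the paper's group double-update technique.

```python
import math
import random

def energy(w, theta, s):
    """Energy of a discrete Hopfield/Boltzmann network with states s in {0, 1}:
    E = -sum_i theta_i * s_i - (1/2) * sum_{i,j} w_ij * s_i * s_j."""
    n = len(s)
    e = -sum(theta[i] * s[i] for i in range(n))
    e -= 0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
    return e

def boltzmann_step(w, theta, s, T):
    """Stochastically update one randomly chosen unit. Unlike the deterministic
    Hopfield rule, this sometimes accepts energy-increasing states, which is
    what lets the annealed network escape local minima."""
    i = random.randrange(len(s))
    # net input to unit i from its neighbours plus its bias
    net = theta[i] + sum(w[i][j] * s[j] for j in range(len(s)) if j != i)
    # probability of unit i being "on" at temperature T (sigmoid of net / T)
    p_on = 1.0 / (1.0 + math.exp(-net / T))
    s[i] = 1 if random.random() < p_on else 0
    return s
```

An annealing run would call `boltzmann_step` repeatedly while gradually lowering T, so the network behaves almost randomly at first and settles into a (hopefully global) minimum as T approaches zero.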
The paper uses algorithms which generate decision rules, decision trees, Bayesian classifiers, nearest neighbour and neural networks, applies ML to a complex decision problem in Constraint Programming (CP), and demonstrates that training a set of classifiers without intrinsic knowledge and combining their decisions can improve performance. The paper measured 37 attributes of the problem instances, drawn from constraint and variable statistics and from attributes based on the primal graph, viz. edge density, clustering coefficient, normalized degree, normalized standard deviation of degree, width of ordering, width of graph, variable domains, constraint arity, multiple shared variables, normalised mean constraints per variable, ratio of auxiliary variables to other variables, tightness, proportion of symmetric variables, etc.

Not so useful:

1. Machine learned heuristics to improve Constraint Satisfaction (2004)
The paper tries to show that machine learning techniques can be used to define better heuristics than heuristics based on single features. Most heuristics follow certain general principles: the first-fail principle for variable selection (enumerate difficult variables first) and the best-promise principle for value selection (choose the value that most likely belongs to a solution).

2. Constraint programming for itemset mining (2008)
The paper formalizes some constraint-based mining problems in constraint programming terminology, in terms of constraints like frequency, closedness and maximality, and constraints that are monotonic, antimonotonic and convertible, as well as variations of these constraints such as d-closedness.

3. Learned Value-Ordering Heuristics for Constraint Satisfaction (2008)

4. Adaptive branching for CSP (2010)
Most algorithms for CSP are based on exhaustive backtracking search interleaved with constraint propagation. Search is guided by variable and value ordering heuristics (VOH) and makes use of either a d-way or a 2-way branching scheme.
The classic VOH smallest domain (dom) selects the variable with the minimum domain size, while the domain-over-dynamic-degree heuristics (dom/deg and dom/ddeg) select the variable with the minimum ratio of domain size to the respective (dynamic) degree. The degree of a variable is the number of constraints involving the variable, and the dynamic degree of an unassigned variable is the number of constraints involving the variable and at least one other unassigned variable. One of the most efficient general-purpose VOHs is dom/wdeg, which assigns a weight to each constraint, initially set to one. Each time a constraint causes a conflict, i.e. a domain wipeout (DWO), its weight is incremented by 1. Each variable is assigned a weighted degree, which is the sum of the weights over all constraints involving the variable and at least one other unassigned variable. The dom/wdeg heuristic chooses the variable with the minimum ratio of current domain size to weighted degree. The paper makes a detailed experimental comparison between 2-way branching and d-way branching and presents 2 generic heuristics that can be used to dynamically adapt the search algorithm's branching scheme.

5. Continuous Optimization (Non-linear and linear programming) (2009)
The paper discusses the mathematical models that underlie optimization problems.

6. Adaptive constraint satisfaction (1996)

7. Genetic neural network approach for CSP (Access not available)

8. Dynamic JChoc: A distributed constraints reasoning platform for dynamically changing environments (2015)
The paper talks about Distributed CSP (DisCSP), Distributed Constraint Optimization Problems (DCOP) and Dynamic Distributed Constraint Satisfaction Problems.
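The dom/wdeg selection rule described under paper 4 can be sketched as follows. This is a minimal illustration, not code from the paper: constraints are assumed to be represented as tuples of variable names, and the data structures (`domains`, `weights`, `assigned`) are hypothetical names introduced for the example.

```python
def dom_wdeg(variables, domains, constraints, weights, assigned):
    """Return the unassigned variable minimizing |dom(x)| / wdeg(x), where
    wdeg(x) sums the weights of constraints involving x and at least one
    other unassigned variable."""
    best, best_ratio = None, float("inf")
    for x in variables:
        if x in assigned:
            continue
        wdeg = sum(weights[c] for c in constraints
                   if x in c and any(y != x and y not in assigned for y in c))
        ratio = len(domains[x]) / wdeg if wdeg > 0 else float("inf")
        if ratio < best_ratio:
            best, best_ratio = x, ratio
    return best

def on_domain_wipeout(constraint, weights):
    """Called when propagating `constraint` empties some domain (a DWO):
    its weight is incremented, steering future branching toward variables
    in frequently conflicting constraints."""
    weights[constraint] += 1
```

Every weight starts at 1, so before any conflict occurs dom/wdeg behaves like dom/ddeg; as DWOs accumulate, the heuristic increasingly prefers variables attached to troublesome constraints.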