1. Automated Modelling and Solving in Constraint Programming (2010)

Constraint programming can be divided very crudely into modelling and solving. Modelling defines the problem in terms of variables that can take on different values, subject to restrictions on which value combinations are allowed. Solving finds values for all the variables that simultaneously satisfy all the constraints. This paper presents technical challenges in the areas of constraint model acquisition, formulation and reformulation, synthesis of filtering algorithms for global constraints, and automated solving. Constraint model acquisition is related to concept learning, theory formation, knowledge acquisition and programming by demonstration. Synthesizing filtering algorithms for global constraints is related to automated programming. Automated solving is closely related to automated theorem proving and problem solving.

Acquisition methods can be evaluated by how difficult it is for them to acquire specific problem classes or instances; difficulty can be measured as the number of solutions and non-solutions that must be presented to the acquisition system before it acquires the model. Learning a CSP from a training set, given as examples of its solutions and non-solutions, is called constraint acquisition. In principle, generic machine-learning methods can be applied to learn an appropriate formulation of the target problem as a CSP. For example, by introducing constraint-directed generalization into constraint logic programs, a technique was developed to learn generalized spatial constraints from a set of input spatial constraints, such that each input constraint is satisfied by the generalization. Standard ML methods, however, do not take the characteristics of CSPs into account and are usually not sufficient to acquire constraints efficiently.
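The elimination view of constraint acquisition can be made concrete with a minimal sketch: start from every candidate (relation, variable-pair) hypothesis and discard those violated by some positive example. The relation library, variable names and examples below are invented for illustration:

```python
from itertools import combinations

# Candidate constraint library (invented for illustration): binary relations
# that may hold between any pair of variables.
RELATIONS = {
    "ne": lambda a, b: a != b,
    "lt": lambda a, b: a < b,
    "eq": lambda a, b: a == b,
}

def acquire(variables, positives):
    """Keep every (relation, scope) hypothesis that no positive example violates."""
    hypotheses = {(name, pair)
                  for pair in combinations(variables, 2)
                  for name in RELATIONS}
    for example in positives:  # example: dict mapping variable -> value
        hypotheses = {(name, (x, y)) for (name, (x, y)) in hypotheses
                      if RELATIONS[name](example[x], example[y])}
    return hypotheses

# Solutions of a hidden "all-different" constraint over x, y, z:
positives = [{"x": 1, "y": 2, "z": 3}, {"x": 2, "y": 1, "z": 3},
             {"x": 3, "y": 1, "z": 2}, {"x": 3, "y": 2, "z": 1}]
learned = acquire(["x", "y", "z"], positives)
```

With these four examples, only the three pairwise disequalities survive elimination; fewer or less varied examples would leave accidental hypotheses (e.g. an ordering constraint) in the version space, which is exactly the difficulty measure discussed above.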
Constraint acquisition is performed by searching through the hypothesis space of candidate relations on a scope of variables and removing those that violate the examples in the training set.

Challenges: these approaches rely heavily on supervision from an expert who labels examples or answers the questions posed by the acquisition system. They are suitable for learning specific instances of a general problem class, but they cannot acquire a description of the problem class itself (a possible remedy: inductive logic programming). Model reformulation can produce CSPs that are easier to solve, but generating candidate sets of variables and implied constraints for a problem is difficult. Many problems nevertheless have significant structure that can be exploited with techniques from machine learning, data mining and discrete mathematics; for example, implied constraints could be mined from the problem formulation using association rule mining. Synthesis of filtering algorithms: filtering is the process of removing, from the domain of each variable, those values that cannot participate in any solution of a constraint.

2. Boltzmann Machines: Constraint Satisfaction Networks that Learn (1984)

The paper presents a type of parallel constraint satisfaction network, called a Boltzmann Machine, capable of learning the underlying constraints that characterize a domain simply by being shown examples from that domain. The network modifies the strengths of its connections so as to construct an internal generative model that produces examples with the same probability distribution as the examples it is shown. When shown a particular example, the network can then interpret it by finding values of the variables in the internal model that would generate that example. When shown a partial example, the network can complete it by finding internal variable values that generate the partial example and using them to generate the remainder.
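The completion behaviour can be sketched with a tiny stochastic binary network. The weights, biases and temperature below are invented for illustration; the real machine also anneals the temperature and learns the weights from examples:

```python
import math
import random

random.seed(0)

def gibbs_step(state, weights, biases, T, free_units):
    """One sweep of stochastic updates: each unclamped unit turns on with
    probability sigmoid(net_input / T), the Boltzmann update rule."""
    for i in free_units:
        net = biases[i] + sum(
            weights.get((min(i, j), max(i, j)), 0.0) * state[j]
            for j in range(len(state)) if j != i)
        p_on = 1.0 / (1.0 + math.exp(-net / T))
        state[i] = 1 if random.random() < p_on else 0

# Tiny illustrative network (weights invented, not from the paper):
# units 0 and 1 excite each other; unit 2 inhibits unit 1.
weights = {(0, 1): 2.0, (1, 2): -2.0}
biases = [0.0, 0.0, 0.0]

# Clamp unit 0 on (the "partial example") and let units 1 and 2 settle,
# tallying how often each free unit is on.
state = [1, 0, 0]
on1 = on2 = 0
for _ in range(200):
    gibbs_step(state, weights, biases, T=0.5, free_units=[1, 2])
    on1 += state[1]
    on2 += state[2]
```

Because unit 1 is excited by the clamped unit while unit 2 is inhibited by unit 1, unit 1 should be on in the large majority of sweeps and unit 2 mostly off; that settled state is the network's completion of the partial example.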
The Boltzmann Machine is well suited to constraint satisfaction tasks involving large numbers of weak constraints. Constraint satisfaction searches normally use strong constraints that must be satisfied by any solution. In some problem domains, however, even the best possible solution violates some constraints; such domains use weak constraints that incur a cost when violated, and the quality of a solution is then determined by the total cost of all the constraints it violates.

3. Convex Optimization

Optimization is extremely hard in general. The choice of method depends heavily on the properties of the objective function and of the functions in the inequality and equality constraints. There are no good methods for the general non-linear optimization problem: finding locally optimal solutions can be efficient but yields suboptimal answers, while global optimization is too expensive. The few exceptions where the global optimum can be found efficiently are least squares, linear programming and convex optimization, the first two being special cases of convex optimization. Singular value decomposition (SVD), which corresponds to finding the best rank-k approximation to a matrix under the Frobenius norm, also admits an efficiently computable global solution. Many combinatorial optimization problems have been identified as convex optimization problems.

Not so useful:

1. A Survey of Very Large-Scale Neighbourhood Search Techniques (2001)

Many optimization problems of practical interest are computationally intractable, so a practical approach is to employ heuristic (approximation) algorithms that can find nearly optimal solutions within a reasonable amount of computation time. Neighbourhood search algorithms (local search algorithms) are a wide class of improvement algorithms in which an improving solution is found by searching the neighbourhood structure of the current solution.
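Returning briefly to the convex-optimization entry above: least squares is the simplest case in which the global optimum is available in closed form, because the objective is convex and its stationarity conditions (the normal equations) are linear. A pure-Python sketch for fitting a line, with invented data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ~ a*x + b via the normal equations.
    The sum-of-squares objective is convex, so this stationary point
    is the global minimum, not merely a local one."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Data lying exactly on y = 2x + 1, so the fit should recover a=2, b=1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

This is the contrast with general non-linear optimization drawn above: no iterative search and no risk of a suboptimal local solution.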
The larger the neighbourhood, the longer each iteration takes to search it, so the search must be carried out efficiently. The paper discusses constructive and improvement algorithms. A constructive algorithm builds a solution from scratch by assigning values to one or more decision variables at a time; an improvement algorithm generally starts with a feasible solution and iteratively tries to obtain a better one. For large problem sizes it is only practical to search a small portion of the neighbourhood. If the simplex algorithm for solving linear programs is viewed as a neighbourhood search algorithm, then column generation is a VLSN method. Similarly, the augmentation techniques used for solving many network flow problems can be categorized as VLSN algorithms, e.g. negative-cost cycle cancelling for the min-cost flow problem and augmenting-path algorithms for matching problems.

The survey focuses on three broad classes of very large-scale neighbourhood search (VLSN) algorithms, mainly applied to NP-hard optimization problems: variable-depth methods, in which exponentially large neighbourhoods are searched heuristically; search using network flow techniques or dynamic programming; and, for NP-hard problems, large neighbourhoods induced by subclasses or restrictions of the original problem that are solvable in polynomial time. Network-flow-based improvement algorithms can be grouped into three categories: minimum-cost cycle finding methods, shortest-path or dynamic-programming based methods, and methods based on finding minimum-cost assignments and matchings.

2. Automatically Configuring Constraint Satisfaction Programs: A Case Study (1996)

This paper discusses a learning system that synthesizes heuristic CSP programs. It takes a library of generic algorithms and heuristics and specializes them for a particular application.
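The improvement-algorithm scheme from the VLSN survey can be illustrated with the classic 2-opt neighbourhood for the travelling salesman problem: reverse one segment of the tour and keep the change if it shortens the tour. Note this neighbourhood is polynomial-sized, deliberately smaller than the exponential ones the survey targets, and the 4-city instance is invented:

```python
from itertools import combinations

def tour_length(tour, dist):
    """Total length of a closed tour under the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """Improvement algorithm: repeatedly search the 2-opt neighbourhood
    (reverse one segment) and move to the first improving neighbour,
    stopping at a local optimum."""
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(1, len(tour)), 2):
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(candidate, dist) < tour_length(tour, dist):
                tour, improved = candidate, True
                break
    return tour

# Four cities at the corners of a unit square (sides 1, diagonals ~1.414);
# the crossing tour 0-2-1-3 should be uncrossed to length 4.
dist = [[0, 1, 1.414, 1],
        [1, 0, 1, 1.414],
        [1.414, 1, 0, 1],
        [1, 1.414, 1, 0]]
best = two_opt([0, 2, 1, 3], dist)
```

The VLSN idea is the same loop with a neighbourhood too large to enumerate, searched implicitly via network flows or dynamic programming instead of the explicit scan used here.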
The system uses two different methods for generating problem-specific search control rules: analytic and inductive. The paper focuses on variable- and value-ordering heuristics and demonstrates that automatically synthesizing heuristic programs is a viable approach to solving CSPs.

3. Backjump-Based Backtracking for CSPs (2001)

The performance of backtracking algorithms can be improved by look-back and look-ahead methods. Look-back techniques extract information by analyzing failing search paths that are terminated by dead ends. Look-ahead techniques use constraint propagation algorithms to avoid dead ends altogether. The paper surveys a number of look-back variants, including backjumping and constraint recording, which help avoid unnecessary exploration of the search space. It also gives an overview of look-ahead methods such as forward checking and dynamic variable ordering, and discusses their combination with backjumping.

4. Backtracking Algorithms for CSPs

This survey covers the basic backtracking search and the improvements of the last two decades, including look-back methods such as backjumping, constraint recording and backmarking, and look-ahead methods such as forward checking and dynamic variable ordering.
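The look-ahead techniques recurring in these papers can be sketched as a backtracking solver with forward checking and fail-first dynamic variable ordering. The toy CSP and the constraint-table representation are invented for illustration; look-back refinements such as backjumping and constraint recording are omitted:

```python
def solve(domains, constraints):
    """Backtracking search with forward checking: after each assignment,
    prune inconsistent values from the domains of unassigned variables
    (look-ahead), failing early when a domain becomes empty.
    `constraints` maps a variable pair (x, y) to a predicate on their values."""
    def consistent(x, vx, y, vy):
        for (a, b), pred in constraints.items():
            if (a, b) == (x, y) and not pred(vx, vy):
                return False
            if (a, b) == (y, x) and not pred(vy, vx):
                return False
        return True

    def backtrack(assignment, domains):
        if len(assignment) == len(domains):
            return dict(assignment)
        # Dynamic variable ordering: pick the unassigned variable with the
        # fewest remaining values (the fail-first heuristic).
        var = min((v for v in domains if v not in assignment),
                  key=lambda v: len(domains[v]))
        for value in domains[var]:
            assignment[var] = value
            # Forward checking: restrict the domains of future variables.
            pruned = {v: [w for w in domains[v] if consistent(var, value, v, w)]
                      for v in domains if v not in assignment}
            if all(pruned.values()):  # dead end avoided if any domain empties
                result = backtrack(assignment, {**domains, **pruned})
                if result is not None:
                    return result
            del assignment[var]
        return None

    return backtrack({}, domains)

# Toy CSP (invented): x != y, y < z, x != z over domains {1, 2, 3}.
doms = {"x": [1, 2, 3], "y": [1, 2, 3], "z": [1, 2, 3]}
cons = {("x", "y"): lambda a, b: a != b,
        ("y", "z"): lambda a, b: a < b,
        ("x", "z"): lambda a, b: a != b}
sol = solve(doms, cons)
```

Backjumping would further replace the chronological `return None` with a jump back to the deepest variable actually responsible for the dead end, skipping irrelevant intermediate assignments.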