Papers read:

1. Using Neural Networks and Genetic Algorithms as Heuristics for NP-Complete Problems

The paper discusses applying neural networks and genetic algorithms as heuristics for SAT problems. Genetic algorithms derive their power from differential feedback: non-solutions that are better than other non-solutions should receive higher function values (when maximizing). For SAT, variable assignments that nearly satisfy the boolean expression should receive higher function values than assignments that satisfy very little of it. This is achieved by basing the SAT function on the parse tree of the boolean expression: truth values are assigned to the variables at the leaf nodes, and the function value at the root node is computed recursively (a sketch of such a graded fitness function appears after paper 2's summary below). The neural network approach involves selecting an appropriate network representation: the constraints must be mapped into an energy function that describes the space to be searched. The activation of each node represents the hypothesis that a particular subexpression is true, and node activations are taken to be binary. There are several differences between the proposed network and a Hopfield net; in a Hopfield network, the links are bidirectional and symmetric. Given the set of logical constraints, an energy function can be derived. The paper thus presents NP-complete problems that can be solved via polynomial-time transformation into equivalent SAT problems. The NN approach appears better suited to smaller problems, while genetic algorithms are better for larger ones.

2. Experimental Evaluation of Branching Schemes for CSP (2010)

The search strategy of a CP solver is determined by the variable and value ordering heuristics and the branching scheme it employs. Complete algorithms for CSPs are based on exhaustive backtracking search interleaved with constraint propagation. The impact of variable and value ordering heuristics on search is widely studied, but the choice between 2-way and d-way branching has received far less attention; the few experimental studies comparing the two have found no significant differences, only marginal ones. The paper also considers dichotomic domain splitting, which splits the current domain of the selected variable into two sets based on the lexicographic value ordering. Domain splitting is shown to achieve weaker propagation than d-way and 2-way branching: even though it reduces the branching factor, it may result in a deeper search tree. Domain splitting is mostly used on optimization problems where the variable domains are very large. Set branching refers to any branching technique that, using some criterion, identifies values that can be grouped together and branched on as a set; the question then reduces to finding a measure of similarity between values and a way to partition domains using that measure. Clustering for set branching is another important section of the paper: to partition domains into sets dynamically, it suggests using clustering algorithms from machine learning, with k-means clustering used to create the value clusters (see the second sketch below). The paper compares the following schemes: 2-way, d-way, domain splitting, ties set branching, and clustering set branching.
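A minimal sketch of the graded, parse-tree-based SAT fitness described in paper 1. The expression encoding and the exact scoring rules (averaging AND children, taking the max over OR children) are my own illustrative assumptions, not necessarily the paper's:

```python
# Graded SAT fitness over a boolean parse tree (illustrative sketch).
# An expression is a nested tuple: ('var', name), ('not', e),
# ('and', e1, e2, ...), or ('or', e1, e2, ...).

def fitness(expr, assignment):
    """Return a score in [0, 1]; 1.0 means the assignment satisfies expr."""
    op = expr[0]
    if op == 'var':
        return 1.0 if assignment[expr[1]] else 0.0
    if op == 'not':
        return 1.0 - fitness(expr[1], assignment)
    scores = [fitness(child, assignment) for child in expr[1:]]
    if op == 'and':   # all children must hold, so give partial credit
        return sum(scores) / len(scores)
    if op == 'or':    # one satisfied child is enough
        return max(scores)
    raise ValueError(f"unknown operator: {op}")

# (x1 OR x2) AND (NOT x1 OR x3): the second assignment is closer and scores higher.
e = ('and', ('or', ('var', 'x1'), ('var', 'x2')),
            ('or', ('not', ('var', 'x1')), ('var', 'x3')))
print(fitness(e, {'x1': True, 'x2': False, 'x3': False}))  # 0.5
print(fitness(e, {'x1': True, 'x2': False, 'x3': True}))   # 1.0
```

A genetic algorithm can use this score directly as the fitness of a candidate assignment, giving near-misses the differential feedback the paper argues for.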
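And a sketch of clustering-based set branching from paper 2. The per-value feature vectors here are hypothetical (e.g., scores from a value-ordering heuristic); the paper's actual similarity measure may differ. It uses scikit-learn's KMeans:

```python
import numpy as np
from sklearn.cluster import KMeans

def set_branches(domain, value_features, n_sets=2):
    """Partition a variable's domain into value sets to branch on.

    domain:         candidate values of the selected variable
    value_features: dict mapping each value to a feature vector
                    (hypothetical, e.g. heuristic promise scores)
    """
    X = np.array([value_features[v] for v in domain])
    labels = KMeans(n_clusters=n_sets, n_init=10).fit_predict(X)
    branches = [[] for _ in range(n_sets)]
    for value, label in zip(domain, labels):
        branches[label].append(value)
    return [b for b in branches if b]  # drop empty clusters

# Values 1..6 scored by a made-up heuristic; similarly scored values
# land in the same branch, so search branches on sets, not single values.
features = {1: [0.9], 2: [0.85], 3: [0.1], 4: [0.15], 5: [0.88], 6: [0.05]}
print(set_branches(list(features), features, n_sets=2))
# e.g. [[1, 2, 5], [3, 4, 6]]
```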
3. Algorithms for the SAT Problem: A Survey (1996)

Solving a CSP entails minimizing local inconsistency and finding a consistent value assignment (a consistent labelling) of the variables subject to the given constraints. Most NP-complete problems can be stated as CSPs, and the proof that a problem is NP-complete implies an efficient way to transform it into a CSP. When no polynomial-time algorithm is known for a CSP, it can be solved by search. Instances of SAT can be formulated as CNF or DNF formulas, over either discrete or continuous variables. Discrete constrained feasibility formulations: the goal is to satisfy all constraints; possible formulations are CNF and DNF. Discrete unconstrained formulations: formulations exist for both DNF and CNF (a sketch of the standard CNF objective appears below). Discrete constrained formulations: various forms exist; one is an ILP problem, and another minimizes an objective function. Continuous formulations: to formulate a discrete instance of SAT in a continuous space, the discrete variables of the original formula are transformed into continuous variables in such a way that solutions to the continuous problem are binary solutions to the original formula (a UniSAT-style sketch also appears below); these divide into continuous unconstrained and continuous constrained formulations. The key to this approach lies in the transformation: when it does not smooth out the local minima of the discrete space, or when the solution density is low, continuous methods are far more computationally expensive to apply than discrete methods. Lagrange-multiplier methods can be applied to solve a continuous constrained optimization problem with a nonlinear objective function and nonlinear constraints. A discrete search problem that can be approached using consistency checking or constraint resolution belongs to the class of discrete constrained methods; alternatively, when the constraints are formulated into an objective function and minimized without reference to any problem constraints, the problem belongs to the class of discrete unconstrained methods, such as local search procedures. In a continuous search space, variables, values, constraints, and objective functions are defined quantitatively with real values. If a continuous optimization problem is solved with explicit constraints, one uses continuous constrained methods such as constrained minimization, primal methods, and cutting-plane methods. If the problem constraints are incorporated into the objective function, the problem is transformed into an unconstrained one, which can be solved by unconstrained methods such as descent methods, conjugate direction methods, and Newton methods.
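A minimal sketch of the discrete unconstrained CNF formulation, assuming the usual objective of counting unsatisfied clauses (zero exactly when the assignment satisfies the formula); the DIMACS-style literal encoding is my convention:

```python
# Discrete unconstrained formulation of CNF-SAT: minimize the number of
# unsatisfied clauses. A literal is a nonzero int: +i for variable i,
# -i for its negation; a clause is a list of literals.

def unsat_count(clauses, assignment):
    """assignment[i] is the boolean value of variable i. Returns the
    objective value: 0 means every clause (hence the formula) holds."""
    def lit_true(lit):
        val = assignment[abs(lit)]
        return val if lit > 0 else not val
    return sum(1 for clause in clauses
               if not any(lit_true(l) for l in clause))

# (x1 v ~x2) ^ (x2 v x3) ^ (~x1 v ~x3)
f = [[1, -2], [2, 3], [-1, -3]]
print(unsat_count(f, {1: True, 2: True, 3: True}))   # 1 (last clause fails)
print(unsat_count(f, {1: True, 2: True, 3: False}))  # 0 (satisfying)
```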
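And a continuous unconstrained transformation in the spirit of the survey's UniSAT models (the survey's exact model may differ): each boolean variable becomes a real y_i with +1 encoding true and -1 false, a positive literal contributes (y_i - 1)^2 and a negative one (y_i + 1)^2, the terms of a clause are multiplied, and the clause values are summed, so the objective is 0 exactly at the +/-1 vectors that encode satisfying assignments:

```python
# Continuous unconstrained SAT objective (UniSAT-style sketch).
# y maps each variable index to a real; y[i] = +1 means True, -1 means False.

def unisat_objective(clauses, y):
    total = 0.0
    for clause in clauses:
        term = 1.0
        for lit in clause:
            v = y[abs(lit)]
            term *= (v - 1.0) ** 2 if lit > 0 else (v + 1.0) ** 2
        total += term  # a clause term is 0 iff one of its literals is "true"
    return total       # global minimum 0 at satisfying +/-1 assignments

f = [[1, -2], [2, 3], [-1, -3]]
print(unisat_objective(f, {1: 1.0, 2: 1.0, 3: -1.0}))  # 0.0 (satisfying)
print(unisat_objective(f, {1: 1.0, 2: 1.0, 3: 1.0}))   # 16.0 (clause 3 fails)
```

Standard continuous minimizers (gradient descent, Newton's method) can then be run on this smooth objective.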
A number of major SAT algorithm classes can be identified:

Discrete, constrained: Davis-Putnam (DP) algorithm (1960); Resolution (1965); Consistency algorithms (1971); Loveland's Davis-Putnam, or DPL (1978); Parallel consistency chips (1986); Binary decision diagrams, or BDDs (1986); Chip and Conquer (1988); Local search and backtracking (1989); DPL plus heuristics, or DPLH (1990); Backtracking and probing (1993); Parallel DP algorithm (1994); Matrix inequality systems (1994); CSAT (1996).

Discrete, unconstrained: Randomized local search (1987); Parallel local search (1987); Local search for n-queens (1988); Unison algorithm and hardware (1990); Local search complexity (1991); Local search for 2-SAT (1991); Local search with traps (1992); Greedy local search, or GSAT (1992).

Continuous, constrained: Branch and bound (1986); Programming models (1988); Cutting plane (1988); Branch and cut (1989); Interior point method (1989).

Continuous, unconstrained: UniSAT models (1987); Global optimization (1987); Neural net models (1989); Global optimization and backtracking (1990); SAT14 algorithms (1991).

The paper also discusses most parallel SAT algorithms in chronological order. Many search techniques, such as stochastic optimization, simulated annealing, stochastic evolution, and conflict minimization, are either local search or variations of it. Greedy local search alone does not perform well on SAT formulas; the components crucial to an efficient local search algorithm for NP-hard problems and satisfiability are min-conflicts heuristics, best-neighbour heuristics, random value/variable selection heuristics, and trap-handling heuristics (a GSAT-style sketch follows below). To avoid getting trapped in local minima, random restarts and tunneling move the search to a new starting point and start over; these work well for continuous problems, but may have difficulty dealing with SAT problems, whose objectives are integers. One way to bring a search out of a local minimum is to formulate the SAT problem as a constrained optimization. One implementation computes the sum of the constraints weighted by penalties and updates the penalties continuously during the search (a sketch of such clause-weight updates also follows below); the difficulty of this approach lies in choosing proper penalties. A more systematic approach is a Lagrangian formulation of the SAT problem in discrete or continuous spaces. Reformulating the discrete unconstrained problem as a continuous constrained problem may smooth out local minima in the continuous space, and a continuous objective value can indicate how close the constraints are to being satisfied, providing additional guidance toward a satisfying assignment. In general, transformational and non-transformational methods are the two approaches to finding global solutions to non-convex optimization problems. Non-transformational approaches include discarding methods, back-to-feasible-region methods, and enumerative methods: discarding methods drop solutions once they are found to be infeasible; back-to-feasible-region methods attempt to maintain feasibility by reflecting moves off boundaries when they would leave the current feasible region; enumerative methods are generally too expensive to apply except for problems with linear objectives and constraints and for bilinear programming problems. Transformational approaches, on the other hand, convert a problem into another form; well-known methods include penalty, barrier, and Lagrange-multiplier methods. Penalty methods incorporate constraints into the objective function and require tuning the penalty coefficients. Barrier methods are similar, except that barriers are set up to prevent solutions from leaving the feasible region. In Lagrangian methods, Lagrange variables are introduced to gradually resolve constraints through iterative updates.
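A minimal GSAT-style sketch, reusing unsat_count from the CNF objective above; the try/flip budgets are arbitrary, and the published GSAT has refinements (sideways-move handling, random-walk variants) not shown here:

```python
import random

def gsat(clauses, n_vars, max_tries=10, max_flips=1000):
    """Greedy local search for SAT: flip the variable whose flip leaves the
    fewest unsatisfied clauses; restart from a random assignment when the
    flip budget runs out. Uses unsat_count() from the sketch above."""
    for _ in range(max_tries):
        a = {i: random.random() < 0.5 for i in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if unsat_count(clauses, a) == 0:
                return a                     # satisfying assignment found
            def after_flip(i):               # objective if variable i flipped
                a[i] = not a[i]
                s = unsat_count(clauses, a)
                a[i] = not a[i]
                return s
            best = min(range(1, n_vars + 1), key=after_flip)
            a[best] = not a[best]            # greedy move (may be sideways/uphill)
    return None                              # give up after max_tries restarts

f = [[1, -2], [2, 3], [-1, -3]]
print(gsat(f, 3))  # e.g. {1: True, 2: True, 3: False}
```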
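And a sketch of the penalty-weight idea discussed above, in the spirit of clause-weighting and discrete Lagrangian methods: local search minimizes a weighted count of unsatisfied clauses, and after each descent the weights of still-unsatisfied clauses are increased so the search is pushed out of local minima. The unit weight increment and the budgets are assumptions:

```python
import random

def weighted_search(clauses, n_vars, passes=50, flips_per_pass=100):
    """Clause-weighting local search (illustrative). weights[k] plays the
    role of a penalty / Lagrange multiplier for clause k: unsatisfied
    clauses grow heavier, reshaping the landscape around local minima."""
    weights = [1.0] * len(clauses)
    a = {i: random.random() < 0.5 for i in range(1, n_vars + 1)}

    def lit_true(lit):
        v = a[abs(lit)]
        return v if lit > 0 else not v

    def cost():  # weighted sum of unsatisfied clauses
        return sum(w for w, c in zip(weights, clauses)
                   if not any(lit_true(l) for l in c))

    for _ in range(passes):
        for _ in range(flips_per_pass):      # greedy descent on weighted cost
            if cost() == 0:
                return a
            def after_flip(i):
                a[i] = not a[i]
                s = cost()
                a[i] = not a[i]
                return s
            best = min(range(1, n_vars + 1), key=after_flip)
            if after_flip(best) >= cost():
                break                        # local minimum of the weighted cost
            a[best] = not a[best]
        for k, c in enumerate(clauses):      # penalty update: heavier weights
            if not any(lit_true(l) for l in c):  # for unsatisfied clauses
                weights[k] += 1.0
    return None
```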
Search methods can be classified into local and global algorithms. Local minimization algorithms, such as gradient descent and Newton's method, find local minima efficiently and work best on unimodal problems. Global methods employ heuristic strategies to look for global minima and do not stop after finding a local minimum. Gradients and Hessians can be used in both local and global methods. Continuous gradient-based local search methods are time-consuming, and continuous descent methods are several orders of magnitude more complex than discrete methods.

Multi-space search: an optimization algorithm seeks a value assignment to the variables such that all constraints are satisfied and the performance objective is optimized. In multi-space search, any active component related to the given input structure can be manipulated and formulated as an independent search space; the totality of all search spaces constitutes a multispace. Instead of the search being restricted to the value space, the multispace is taken as the search space, and components other than values can be manipulated and optimized as well. During the search, a multi-space search algorithm walks across the variable space and the other active spaces, dynamically changes the input structure in terms of variables, parameters, and other components, and systematically constructs intermediate instances. Each intermediate instance is solved by an optimization algorithm, and the solution found is used as the initial solution to the next intermediate instance. In each search step, two fundamental operations are performed: traditional value search and structural reconfiguration of the intermediate instance. There are two preprocessing methods for satisfiability testing in multi-space search: partitioning the input size and partitioning the variable domains. The paper concludes by comparing the performance of most of the algorithms mentioned above.

Not so important: 1. Discovering Knowledge Using a Constraint-based Language (2011)