Solution Manual for Optimization in Operations Research, 2nd Edition, by Ronald Rardin – Ebook PDF Instant Download/Delivery: 0134384555, 9780134384559
Full download of Optimization in Operations Research, 2nd Edition is available after payment
Product details:
ISBN 10: 0134384555
ISBN 13: 9780134384559
Author: Ronald L. Rardin
This is the eBook of the printed book and may not include any media, website access codes, or print supplements that may come packaged with the bound book.

Developing skills and intuitions through accessible optimization models and analysis. Rardin's Optimization in Operations Research, Second Edition builds on the critically acclaimed first edition, published nearly two decades ago and named Book of the Year in 1999 by the Institute of Industrial Engineers. The goal of the Second Edition is to make the tools of optimization modeling and analysis even more widely accessible to advanced undergraduate and beginning graduate students, as well as to researchers and working practitioners who use it as a reference for self-study. The emphasis lies in developing skills and intuitions that students can apply in real settings or later coursework.

Like the first edition, the Second Edition covers the full scope of optimization (mathematical programming), spanning linear, integer, nonlinear, network, and dynamic programming models and algorithms, in both single- and multiobjective contexts. New material adds large-scale, stochastic, and complexity topics, while broadly deepening mathematical rigor without sacrificing the original's intuitive style. This edition also continues the author's belief that making optimization materials accessible and exciting to readers of diverse backgrounds requires a continuing discourse on optimization modeling. Every algorithm and analytic principle is developed in the context of a brief story, and computational exercises often begin with a formulation step.
Optimization in Operations Research, 2nd Edition – Table of Contents:
Chapter 1 Problem Solving with Mathematical Models
1.1 OR Application Stories
1.2 Optimization and the Operations Research Process
Decisions, Constraints, and Objectives
Optimization and Mathematical Programming
Constant-Rate Demand Assumption
Back of Envelope Analysis
Constant-Rate Demand Model
Feasible and Optimal Solutions
1.3 System Boundaries, Sensitivity Analysis, Tractability, and Validity
EOQ Under Constant-Rate Demand
System Boundaries and Sensitivity Analysis
Closed-Form Solutions
Tractability versus Validity
1.4 Descriptive Models and Simulation
Simulation over MM’s History
Simulation Model Validity
Descriptive versus Prescriptive Models
1.5 Numerical Search and Exact Versus Heuristic Solutions
Numerical Search
A Different Start
Exact versus Heuristic Optimization
1.6 Deterministic Versus Stochastic Models
Random Variables and Realizations
Stochastic Simulation
Tradeoffs between Deterministic and Stochastic Models
1.7 Perspectives
Other Issues
The Rest of This Book
Exercises
Chapter 2 Deterministic Optimization Models in Operations Research
2.1 Decision Variables, Constraints, and Objective Functions
Decision Variables
Variable-Type Constraints
Main Constraints
Objective Functions
Standard Model
Solution:
2.2 Graphic Solution and Optimization Outcomes
Graphic Solution
Feasible Sets
Graphing Constraints and Feasible Sets
Solution:
Graphing Objective Functions
Solution:
Optimal Solutions
Solution:
Optimal Values
Unique versus Alternative Optimal Solutions
Solution:
Infeasible Models
Solution:
Unbounded Models
Solution:
2.3 Large-Scale Optimization Models and Indexing
Indexing
Indexed Decision Variables
Solution:
Indexed Symbolic Parameters
Solution:
Objective Functions
Indexed Families of Constraints
Solution:
Solution:
Pi Hybrids Application Model
How Models Become Large
2.4 Linear and Nonlinear Programs
General Mathematical Programming Format
Right-Hand Sides
Solution:
Linear Functions
Solution:
Linear and Nonlinear Programs Defined
Solution:
Two Crude and Pi Hybrids Models Are LPs
Indexing, Parameters, and Decision Variables for E-mart
Nonlinear Response
E-mart Application Model
2.5 Discrete or Integer Programs
Indexes and Parameters of the Bethlehem Application
Discrete versus Continuous Decision Variables
Solution:
Constraints with Discrete Variables
Solution:
Bethlehem Ingot Mold Application Model
Integer and Mixed-Integer Programs
Solution:
Integer Linear versus Integer Nonlinear Programs
Solution:
Indexing, Parameters, and Decision Variables for Purdue Finals Application
Nonlinear Objective Function
Purdue Final Exam Scheduling Application Model
2.6 Multiobjective Optimization Models
Multiple Objectives
Constraints of the DuPage Land Use Application
DuPage Land Use Application Model
Conflict among Objectives
Solution:
2.7 Classification Summary
2.8 Computer Solution and AMPL
Solvers versus Modeling Languages
Indexing, Summations, and Symbolic Parameters
Solution:
Nonlinear and Integer Models
Solution:
Exercises
References
Chapter 3 Improving Search
3.1 Improving Search, Local, and Global Optima
Solutions
Solutions as Vectors
Solution:
Example of an Improving Search
Neighborhood Perspective
Local Optima
Local Optima and Improving Search
Local versus Global Optima
Solution:
Dealing with Local Optima
3.2 Search with Improving and Feasible Directions
Direction-Step Paradigm
Solution:
Solution:
Improving Directions
Solution:
Feasible Directions
Solution:
Step Size: How Far?
Solution:
Search of the DClub Example
When Improving Search Stops
Detecting Unboundedness
Solution:
3.3 Algebraic Conditions for Improving and Feasible Directions
Gradients
Gradient Conditions for Improving Directions
Solution:
Objective Function Gradients as Move Directions
Solution:
Active Constraints and Feasible Directions
Solution:
Linear Constraints
Conditions for Feasible Directions with Linear Constraints
Solution:
Solution:
3.4 Tractable Convex and Linear Cases
Special Tractability of Linear Objective Functions
Constraints and Local Optima
Convex Feasible Sets
Solution:
Algebraic Description of Line Segments
Solution:
Solution:
Convenience of Convex Feasible Sets for Improving Search
Global Optimality of Linear Objectives over Convex Feasible Sets
Solution:
Convexity of Linearly Constrained Feasible Sets
Solution:
Global Optimality of Improving Search for Linear Programs
Blocking Constraints in Linear Programs
Solution:
3.5 Searching for Starting Feasible Solutions
Two-Phase Method
Two Crude Model Application Revisited
Artificial Variables
Phase I Models
Solution:
Starting Artificial Solution
Solution:
Phase I Outcomes
Solution:
Concluding Infeasibility from Phase I
Solution:
Big-M Method
Solution:
Big-M Outcomes
Solution:
Exercises
References
Chapter 4 Linear Programming Models
4.1 Allocation Models
Allocation Decision Variables
Forest Service Allocation Model
Solution:
4.2 Blending Models
Ingredient Decision Variables
Composition Constraints
Solution:
Swedish Steel Example Model
Ratio Constraints
Solution:
4.3 Operations Planning Models
Tubular Products Operations Planning Model
CFPL Decision Variables
Continuous Variables for Integer Quantities
CFPL Objective Function
CFPL Constraints
Balance Constraints
Solution:
CFPL Application Model
Solution:
4.4 Shift Scheduling and Staff Planning Models
ONB Decision Variables and Objective Function
ONB Constraints
Covering Constraints
ONB Shift Scheduling Application Model
Solution:
4.5 Time-Phased Models
Time-Phased Decision Variables
Time-Phased Balance Constraints
Solution:
IFS Cash Flow Model
Time Horizons
Solution:
4.6 Models with Linearizable Nonlinear Objectives
Maxisum Highway Patrol Application Model
Minimax and Maximin Objective Functions
Nonlinear Maximin Highway Patrol Application Model
Linearizing Minimax and Maximin Objective Functions
Linearized Maximin Highway Patrol Example Model
Solution:
Nonlinear VP Location Model
Min Deviation Objective Functions
Linearizing Min Deviation Objective Functions
Linearized VP Location Model
Solution:
4.7 Stochastic Programming
Solution:
Deterministic Model of QA Example
Stochastic Programming with Recourse
Stochastic Programming Modeling of the QA Application
Solution:
Extensive Form versus Large-Scale Techniques
Exercises
References
Chapter 5 Simplex Search for Linear Programming
5.1 LP Optimal Solutions and Standard Form
Global Optima in Linear Programs
Interior, Boundary, and Extreme Points
Solution:
Optimal Points in Linear Programs
Solution:
LP Standard Form
Converting Inequalities to Nonnegativities with Slack Variables
Solution:
Converting Nonpositive and Unrestricted Variables to Nonnegative
Solution:
Standard Notation for LPs
Solution:
5.2 Extreme-Point Search and Basic Solutions
Determining Extreme Points with Active Constraints
Solution:
Adjacent Extreme Points and Edges
Solution:
Basic Solutions
Solution:
Existence of Basic Solutions
Solution:
Basic Feasible Solutions and Extreme Points
Solution:
5.3 The Simplex Algorithm
Standard Display
Initial Basic Solution
Simplex Directions
Solution:
Improving Simplex Directions and Reduced Costs
Solution:
Step Size and the Minimum Ratio Rule
Solution:
Updating the Basis
Solution:
Rudimentary Simplex Algorithm
Rudimentary Simplex Solution of Top Brass Example
Stopping and Global Optimality
Extreme-Point or Extreme-Direction
5.4 Dictionary and Tableau Representations of Simplex
Simplex Dictionaries
Solution:
Simplex Tableaux
Solution:
Simplex Algorithm with Dictionaries or Tableaux
Correspondence to the Improving Search Paradigm
Comparison of Formats
5.5 Two-Phase Simplex
Starting Basis in the Two-Phase Simplex
Solution:
Three Possible Outcomes for Linear Programs
Clever Clyde Infeasible Case
Solution:
Clever Clyde Optimal Case
Solution:
Clever Clyde Unbounded Case
5.6 Degeneracy and Zero-Length Simplex Steps
Degenerate Solutions
Solution:
Zero-Length Simplex Steps
Progress through Changing of Bases
Solution:
5.7 Convergence and Cycling with Simplex
Finite Convergence with Positive Steps
Solution:
Degeneracy and Cycling
5.8 Doing It Efficiently: Revised Simplex
Computations with Basis Inverses
Solution:
Updating the Representation of B−1
Solution:
Basic Variable Sequence in Revised Simplex
Solution:
Computing Reduced Costs by Pricing
Solution:
Revised Simplex Search of Top Brass Application
5.9 Simplex with Simple Upper and Lower Bounds
Lower- and Upper-Bounded Standard Form
Solution:
Basic Solutions with Lower and Upper Bounds
Solution:
Unrestricted Variables with No Bounds
Increasing and Decreasing Nonbasic Variable Values
Solution:
Step Size with Increasing and Decreasing Values
Solution:
Case with No Basis Change
Lower- and Upper-Bounded Simplex Algorithm
Lower- and Upper-Bounded Simplex on Top Brass Application
Exercises
References
Chapter 6 Duality, Sensitivity, and Optimality in Linear Programming
6.1 Generic Activities Versus Resources Perspective
Objective Functions as Costs and Benefits
Choosing a Direction for Inequality Constraints
Inequalities as Resource Supplies and Demands
Equality Constraints as Both Supplies and Demands
Variable-Type Constraints
Variables as Activities
LHS Coefficients as Activity Inputs and Outputs
Solution:
6.2 Qualitative Sensitivity to Changes in Model Coefficients
Relaxing versus Tightening Constraints
Swedish Steel Application Revisited
Effects of Changes in Right-Hand Sides
Solution:
Effects of Changes in LHS Constraint Coefficients
Solution:
Effects of Adding or Dropping Constraints
Solution:
Effects of Unmodeled Constraints
Changing Rates of Constraint Coefficient Impact
Solution:
Effects of Objective Function Coefficient Changes
Solution:
Changing Rates of Objective Function Coefficient Impact
Solution:
Effects of Adding or Dropping Variables
Solution:
6.3 Quantifying Sensitivity to Changes in LP Model Coefficients: A Dual Model
Primals and Duals Defined
Dual Variables
Solution:
Dual Variable Types
Two Crude Application Again
Solution:
Dual Variables as Implicit Marginal Resource Prices
Solution:
Implicit Activity Pricing in Terms of Resources Produced and Consumed
Solution:
Main Dual Constraints to Enforce Activity Pricing
Solution:
Optimal Value Equality between Primal and Dual
Solution:
Primal Complementary Slackness between Primal Constraints and Dual Variable Values
Solution:
Dual Complementary Slackness between Dual Constraints and Primal Variable Values
Solution:
6.4 Formulating Linear Programming Duals
Form of the Dual for Nonnegative Primal Variables
Solution:
Duals of LP Models with Nonpositive and Unrestricted Variables
Solution:
Dual of the Dual is the Primal
Solution:
6.5 Computer Outputs and What If Changes of Single Parameters
CFPL Example Primal and Dual
Constraint Sensitivity Outputs
Right-Hand-Side Ranges
Solution:
Constraint What If’s
Variable Sensitivity Outputs
Objective Coefficient Ranges
Solution:
Variable What If’s
Dropping and Adding Constraint What If’s
Dropping and Adding Variable What If’s
6.6 Bigger Model Changes, Reoptimization, and Parametric Programming
Ambiguity at Limits of the RHS and Objective Coefficient Ranges
Solution:
Connection between Rate Changes and Degeneracy
Reoptimization to Make Sensitivity Exact
Parametric Variation of One Coefficient
Solution:
Assessing Effects of Multiple Parameter Changes
Parametric Multiple-RHS Change
Solution:
Parametric Change of Multiple Objective Function Coefficients
Solution:
6.7 Duality and Optimality in Linear Programming
Dual of the Dual
Weak Duality between Objective Values
Solution:
Unbounded and Infeasible Cases
Solution:
Complementary Slackness and Optimality
Solution:
Strong Duality and Karush-Kuhn-Tucker (KKT) Optimality Conditions for Linear Programs
Solution:
Models in Standard Form
Solution:
Solution:
Standard Form LPs in Partitioned Basic Format
Solution:
Basic Solutions in Partitioned Form
Solution:
Complementary Dual Basic Solutions
Solution:
Primal Simplex Optimality and Necessity of KKT Conditions
Solution:
6.8 Dual Simplex Search
Solution:
Choosing an Improving Direction
Determining a Dual Step Size to Retain Dual Feasibility
Changing the Primal Solution and Basis Update
Solution:
6.9 Primal-Dual Simplex Search
Solution:
Choosing an Improving Dual Direction
Determining a Dual Step Size
Solution:
Exercises
References
Chapter 7 Interior Point Methods for Linear Programming
7.1 Searching through the Interior
Interior Points
Objective as a Move Direction
Solution:
Boundary Strategy of Interior Point Methods
Solution:
Interior in LP Standard Form
Solution:
Projecting to Deal with Equality Constraints
Solution:
Improvement with Projected Directions
Solution:
7.2 Scaling with the Current Solution
Affine Scaling
Diagonal Matrix Formalization of Affine Scaling
Solution:
Affine-Scaled Standard Form
Solution:
Projecting on Affine-Scaled Equality Constraints
Solution:
Computational Effort in Interior Point Computations
7.3 Affine Scaling Search
Affine Scaling Move Directions
Solution:
Feasibility and Improvement of Affine Scaling Directions
Affine Scaling Step Size
Solution:
Termination in Affine Scaling Search
Affine Scaling Search of the Frannie’s Firewood Application
7.4 Log Barrier Methods for Interior Point Search
Barrier Objective Functions
Solution:
Problems with Gradient Directions
Newton Steps for Barrier Search
Solution:
Newton Step Barrier Search Step Sizes
Solution:
Impact of the Barrier Multiplier μ
Barrier Algorithm Multiplier Strategy
Newton Step Barrier Algorithm
Newton Barrier Solution of Frannie’s Firewood Application
7.5 Primal-Dual Interior-Point Search
KKT Optimality Conditions
Strategy of Primal-Dual Interior-Point Search
Feasible Move Directions
Management of Complementary Slackness
Step Size
Solving the Conditions for Move Directions
7.6 Complexity of Linear Programming Search
Length of Input for LP Instances
Complexity of Simplex Algorithms for LP
Complexity of Interior-Point Algorithms for LP
Exercises
References
Chapter 8 Multiobjective Optimization and Goal Programming
8.1 Multiobjective Optimization Models
Bank Three Example Objectives
Bank Three Example Model
Dynamometer Ring Design Model
Hazardous Waste Disposal Model
8.2 Efficient Points and the Efficient Frontier
Efficient Points
Identifying Efficient Points Graphically
Solution:
Efficient Frontier
Plots in Objective Value Space
Constructing The Efficient Frontier
Solution:
8.3 Preemptive Optimization and Weighted Sums of Objectives
Preemptive Optimization
Preemptive Optimization of the Bank Three Application
Solution:
Preemptive Optimization and Efficient Points
Preemptive Optimization and Alternative Optima
Weighted Sums of Objectives
Solution:
Weighted-Sum Optimization of the Hazardous Waste Application
Weighted-Sum Optimization and Efficient Points
8.4 Goal Programming
Goal or Target Levels
Goal Form of Bank Three Application
Soft Constraints
Deficiency Variables
Expressing Soft Constraints in Mathematical Programs
Solution:
Goal Program Objective Function: Minimizing (Weighted) Deficiency
Goal Linear Program Model of the Bank Three Application
Alternative Deficiency Weights in the Objective
Solution:
Preemptive Goal Programming
Preemptive Goal Programming of the Bank Three Application
Solution:
Preemptive Goal Programming by Weighting the Objective
Practical Advantage of Goal Programming in Multiobjective Problems
Solution:
Goal Programming and Efficient Points
Solution:
Modified Goal Program Formulation to Assure Efficient Points
Solution:
Exercises
References
Chapter 9 Shortest Paths and Discrete Dynamic Programming
9.1 Shortest Path Models
Nodes, Arcs, Edges, and Graphs
Solution:
Paths
Solution:
Shortest Path Problems
Classification of Shortest Path Models
Undirected and Directed Graphs (Digraphs)
Solution:
Two Ring Application Model
9.2 Dynamic Programming Approach to Shortest Paths
Families of Shortest Path Models
Functional Notation
Solution:
Optimal Paths and Subpaths
Negative Dicycles Exception
Solution:
Principle of Optimality
Functional Equations
Functional Equations for One Node to All Others
Solution:
Sufficiency of Functional Equations in the One to All Case
Solution:
Functional Equations for All Nodes to All Others
Solution:
Solving Shortest Path Problems by Linear Programming
9.3 Shortest Paths from One Node to All Others: Bellman–Ford
Solving the Functional Equations
Repeated Evaluation Algorithm: Bellman–Ford
Bellman–Ford Solution of the Two Ring Circus Application
Solution:
Justification of the Bellman–Ford Algorithm
Recovering Optimal Paths
Solution:
Encountering Negative Dicycles with Bellman–Ford
9.4 Shortest Paths from All Nodes to All Others: Floyd–Warshall
Floyd–Warshall Algorithm
Floyd–Warshall Solution of the Littleville Application
Solution:
Recovering Optimal Paths
Solution:
Detecting Negative Dicycles with Floyd–Warshall
9.5 Shortest Path from One Node to All Others with Costs Nonnegative: Dijkstra
Permanently and Temporarily Labeled Nodes
Least Temporary Criterion for Next Permanent Node
Dijkstra Algorithm Solution of the Texas Transfer Application
Solution:
Recovering Paths
Justification of the Dijkstra Algorithm
9.6 Shortest Paths from One Node to All Others in Acyclic Digraphs
Acyclic Digraphs
Solution:
Shortest Path Algorithm for Acyclic Digraphs
Acyclic Shortest Path Example
Longest Path Problems and Acyclic Digraphs
9.7 CPM Project Scheduling and Longest Paths
Project Management
CPM Project Networks
Solution:
CPM Schedules and Longest Paths
Critical Paths
Solution:
Computing an Early Start Schedule for the We Build Construction Application
Solution:
Late Start Schedules and Schedule Slack
Solution:
Acyclic Character of Project Networks
9.8 Discrete Dynamic Programming Models
Sequential Decision Problems
States in Dynamic Programming
Digraphs for Dynamic Programs
Dynamic Programming Solutions as an Optimal Path
Solution:
Dynamic Programming Functional Equations
Solution:
Dynamic Programming Models with Both Stages and States
Dynamic Programming Modeling of the President’s Library Application
Backward Solution of Dynamic Programs
Multiple Problem Solutions Obtained Simultaneously
9.9 Solving Integer Programs with Dynamic Programming
Dynamic Programming Modeling of Electoral Vote Knapsack
Solution:
9.10 Markov Decision Processes
Elements of MDP Models
Solution:
Solution of the Breast Cancer MDP
Exercises
References
Chapter 10 Network Flows and Graphs
10.1 Graphs, Networks, and Flows
Digraphs, Nodes, and Arcs
OOI Application Network
Minimum Cost Flow Models
Sources, Sinks, and Transshipment Nodes
OOI Application Model
Solution:
Total Supply = Total Demand
Solution:
Starting Feasible Solutions
Artificial Network Flow Model
Solution:
Time-Expanded Flow Models and Networks
Time-Expanded Modeling of Agrico Application
Solution:
Node–Arc Incidence Matrices and Matrix Standard Form
Solution:
Solution:
10.2 Cycle Directions for Network Flow Search
Chains, Paths, Cycles, and Dicycles
Cycle Directions
Maintaining Flow Balance with Cycle Directions
Solution:
Feasible Cycle Directions
Solution:
Improving Cycle Directions
Solution:
Step Size with Cycle Directions
Solution:
Sufficiency of Cycle Directions
Rudimentary Cycle Direction Search for Network Flows
Rudimentary Cycle Direction Search of the OOI Application
10.3 Cycle Cancelling Algorithms for Optimal Flows
Residual Digraphs
Solution:
Feasible Cycle Directions and Dicycles of Residual Digraphs
Improving Feasible Cycle Directions and Negative Dicycles of Residual Digraphs
Solution:
Using Shortest Path Algorithms to Find Cycle Directions
Solution:
Cycle Cancelling Solution of the OOI Application
Solution:
Polynomial Computational Order of Cycle Cancelling
10.4 Network Simplex Algorithm for Optimal Flows
Linear Dependence in Node–Arc Matrices and Cycles
Solution:
Spanning Trees of Networks
Spanning Tree Bases for Network Flow Models
Solution:
Network Basic Solutions
Solution:
Simplex Cycle Directions
Solution:
Network Simplex Algorithm
Network Simplex Solution of OOI Application
Solution:
10.5 Integrality of Optimal Network Flows
When Optimal Network Flows Must Be Integer
Solution:
Total Unimodularity of Node–Arc Incidence Matrices
10.6 Transportation and Assignment Models
Transportation Problems
Solution:
Standard Form for Transportation Problems
Solution:
Assignment Problems
Solution:
Balancing Unequal Sets with Dummy Elements
Integer Network Flow Solution of Assignment Problems
CAM Assignment Application Model
10.7 Hungarian Algorithm for Assignment Problems
Primal-Dual Strategy and Initial Dual Solution
Equality Subgraph
Labeling to Search for a Primal Solution in the Equality Subgraph
Dual Update and Revised Equality Subgraph
Solution Growth Along Alternating Paths
Computational Order of the Hungarian Algorithm
10.8 Maximum Flows and Minimum Cuts
Improving Feasible Cycle Directions and Flow Augmenting Paths
The Max Flow Min Cut Algorithm
Solution of Max Flow Application of Figure 10.25(a) with Algorithm 10E
Solution:
Equivalence of Max Flow and Min Cut Values
Computational Order of Algorithm 10E Effort
10.9 Multicommodity and Gain/Loss Flows
Multicommodity Flows
Multicommodity Flow Models
Solution:
Tractability of Multicommodity Flow Models
Flows with Gains and Losses
Gain and Loss Network Flow Models
Solution:
Tractability of Network Flows with Gains and Losses
10.10 Min/Max Spanning Trees
Minimum/Maximum Spanning Trees and the Greedy Algorithm
Solution of the WE Application 10.8 by Greedy Algorithm 10F
Solution:
Representing Greedy Results in a Composition Tree
ILP Formulation of the Spanning Tree Problem
Solution:
Computational Order of the Greedy Algorithm
Exercises
References
Chapter 11 Discrete Optimization Models
11.1 Lumpy Linear Programs and Fixed Charges
Swedish Steel Application with All-or-Nothing Constraints
ILP Modeling of All-or-Nothing Requirements
Swedish Steel Model with All-or-Nothing Constraints
Solution:
ILP Modeling of Fixed Charges
Swedish Steel Application with Fixed Charges
Solution:
11.2 Knapsack and Capital Budgeting Models
Knapsack Problems
Solution:
Capital Budgeting Models
Budget Constraints
Solution:
Modeling Mutually Exclusive Choices
Solution:
Modeling Dependencies between Projects
Solution:
NASA Application Model
11.3 Set Packing, Covering, and Partitioning Models
Set Packing, Covering, and Partitioning Constraints
Solution:
Minimum Cover EMS Model
Maximum Coverage EMS Model
Solution:
Column Generation Models
Solution:
11.4 Assignment and Matching Models
Assignment Constraints
CAM Linear Assignment Application Revisited
Linear Assignment Models
Solution:
Quadratic Assignment Models
Mall Layout Application Model
Solution:
Generalized Assignment Models
CDOT Application Model
Solution:
Matching Models
Superfi Application Model
Solution:
Tractability of Assignment and Matching Models
11.5 Traveling Salesman and Routing Models
Traveling Salesman Problem
Symmetric versus Asymmetric Cases of the TSP
Formulating the Symmetric TSP
Solution:
Subtours
Solution:
ILP Model of the Symmetric TSP
ILP Model of the Asymmetric TSP
Solution:
Quadratic Assignment Formulation of the TSP
Problems Requiring Multiple Routes
KI Truck Routing Application Model
11.6 Facility Location and Network Design Models
Facility Location Models
ILP Model of Facilities Location
Tmark Facilities Location Application Model
Solution:
Network Design Models
Wastewater Network Design Application Model
Solution:
11.7 Processor Scheduling and Sequencing Models
Single-Processor Scheduling Problems
Time Decision Variables
Conflict Constraints and Disjunctive Variables
Solution:
Handling of Due Dates
Processor Scheduling Objective Functions
Solution:
ILP Formulation of Minmax Scheduling Objectives
Solution:
Equivalences among Scheduling Objective Functions
Job Shop Scheduling
Custom Metalworking Application Decision Variables and Objective
Precedence Constraints
Solution:
Conflict Constraints in Job Shops
Solution:
Custom Metalworking Application Model
Exercises
References
Chapter 12 Exact Discrete Optimization Methods
12.1 Solving by Total Enumeration
Total Enumeration
Swedish Steel All-or-Nothing Application
Solution:
Exponential Growth of Cases to Enumerate
Principle 12.3
12.2 Relaxations of Discrete Optimization Models and Their Uses
Constraint Relaxations
Solution:
Linear Programming Relaxations
Solution:
Relaxations Modifying Objective Functions
Solution:
Proving Infeasibility with Relaxations
Solution:
Solution Value Bounds from Relaxations
Solution:
Optimal Solutions from Relaxations
Solution:
Rounded Solutions from Relaxations
Solution:
Stronger LP Relaxations
Solution:
Choosing Big-M Constants
Solution:
12.3 Branch and Bound Search
Partial Solutions
Completions of Partial Solutions
Solution:
Tree Search
Solution:
Incumbent Solutions
Solution:
Candidate Problems
Solution:
Terminating Partial Solutions with Relaxations
Solution:
LP-Based Branch and Bound
Branching Rules for LP-Based Branch and Bound
LP-Based Branch and Bound Solution of the River Power Application
Solution:
12.4 Refinements to Branch and Bound
Branch and Bound Solution of NASA Capital Budgeting Application
Rounding for Incumbent Solutions
Solution:
Branch and Bound Family Tree Terminology
Solution:
Parent Bounds
Solution:
Terminating with Parent Bounds
Solution:
Stopping Early: Branch and Bound as a Heuristic
Bounds on the Error of Stopping with the Incumbent Solution
Solution:
Depth First, Best First, and Depth Forward Best Back Sequences
Solution:
12.5 Branch and Cut
Valid Inequalities
Solution:
Branch and Cut Search
Branch and Cut Solution of the River Power Application
Solution:
12.6 Families of Valid Inequalities
Gomory Cutting Planes (Pure Integer Case)
Gomory Mixed-Integer Cutting Planes
Families of Valid Inequalities from Specialized Models
Solution:
12.7 Cutting Plane Theory
The Convex Hull of Integer Feasible Solutions
Solution:
Linear Programs over Convex Hulls
Faces, Facets, and Categories of Valid Inequalities
Solution:
Affinely Independent Characterization of Facet-Inducing Valid Inequalities
Solution:
Partial Dimensional Convex Hulls and Valid Equalities
Solution:
Exercises
References
Chapter 13 Large-Scale Optimization Methods
13.1 Delayed Column Generation and Branch and Price
Models Attractive for Delayed Column Generation
Partial Master Problems
Generic Delayed Column Generation Algorithm
Application of Algorithm 13A to Application 13.1
Generating Eligible Columns to Enter
Solution:
Branch and Price Search
13.2 Lagrangian Relaxation
Lagrangian Relaxations
Solution:
Tractable Lagrangian Relaxations
Lagrangian Relaxation Bounds and Optima
Solution:
Lagrangian Duals
Lagrangian versus Linear Programming Relaxation Bounds
Lagrangian Dual Objective Functions
Subgradient Search for Lagrangian Bounds
Application of Subgradient Search to Numerical Example
13.3 Dantzig–Wolfe Decomposition
Reformulation in Terms of Extreme Points and Extreme Directions
Reformulation from GB Application 13.4 Subproblems
Delayed Generation of Subproblem Extreme-Point and Extreme-Direction Columns
Dantzig–Wolfe Solution of GB Application 13.4
13.4 Benders Decomposition
Benders Decomposition Strategy
Optimality in Benders Algorithm 13E
Solution of Heart Guardian Application 13.5 with Benders Algorithm 13E
Exercises
References
Chapter 14 Computational Complexity Theory
14.1 Problems, Instances, and the Challenge
The Challenge
14.2 Measuring Algorithms and Instances
Computational Orders
Solution:
Instance Size as the Length of an Encoding
Solution:
Expressions for Encoding Length of All a Problem’s Instances
Solution:
14.3 The Polynomial-Time Standard for Well-Solved Problems
Solution:
14.4 Polynomial and Nondeterministic-Polynomial Solvability
Decision versus Optimization Problems
Class P – Polynomially Solvable Decision Problems
Class NP – Nondeterministic-Polynomially Solvable Decision Problems
Polynomial versus Nondeterministic Polynomial Problem Classes
Solution:
14.5 Polynomial-Time Reductions and NP-Hard Problems
Polynomial Reductions between Problems
NP-Complete and NP-Hard Problems
Solution:
14.6 P versus NP
The P≠NP Conjecture
14.7 Dealing with NP-Hard Problems
Special Cases
Pseudo-Polynomial Algorithms
Average Case Performance
Stronger Relaxations and Cuts for B&B and B&C
Specialized Heuristics with Provable Worst-Case Performance
General Purpose Approximate/Heuristic Algorithms
Exercises
References
Chapter 15 Heuristic Methods for Approximate Discrete Optimization
15.1 Constructive Heuristics
Rudimentary Constructive Search Algorithm
Greedy Choices of Variables to Fix
Greedy Rule for NASA Application
Solution:
Constructive Heuristic Solution of NASA Application
Solution:
Need for Constructive Search
Constructive Search of KI Truck Routing Application
15.2 Improving Search Heuristics for Discrete Optimization INLPs
Rudimentary Improving Search Algorithm
Discrete Neighborhoods and Move Sets
Solution:
NCB Application Revisited
Choosing a Move Set
Solution:
Rudimentary Improving Search of the NCB Application
Solution:
Multistart Search
Solution:
15.3 Tabu and Simulated Annealing Metaheuristics
Difficulty with Allowing Nonimproving Moves
Tabu Search
Tabu Search of the NCB Application
Solution:
Simulated Annealing Search
Simulated Annealing Search of NCB Application
Solution:
15.4 Evolutionary Metaheuristics and Genetic Algorithms
Crossover Operations in Genetic Algorithms
Managing Genetic Algorithms with Elites, Immigrants, Mutations, and Crossovers
Solution Encoding for Genetic Algorithm Search
Genetic Algorithm Search of NCB Application
Exercises
References
Chapter 16 Unconstrained Nonlinear Programming
16.1 Unconstrained Nonlinear Programming Models
USPS Single-Variable Application Model
Neglecting Constraints to Use Unconstrained Methods
Curve Fitting and Regression Problems
Linear versus Nonlinear Regression
Solution:
Regression Objective Functions
Custom Computer Curve Fitting Application Model
Solution:
Maximum Likelihood Estimation Problems
PERT Maximum Likelihood Application Model
Solution:
Smooth versus Nonsmooth Functions and Derivatives
Solution:
Usable Derivatives
Solution:
16.2 One-Dimensional Search
Unimodal Objective Functions
Golden Section Search
Golden Section Solution of USPS Application
Solution:
Bracketing and 3-Point Patterns
Finding a 3-Point Pattern
Solution:
Quadratic Fit Search
Quadratic Fit Solution of USPS Application
Solution:
16.3 Derivatives, Taylor Series, and Conditions for Local Optima in Multiple Dimensions
Improving Search Paradigm
Local Information and Neighborhoods
First Derivatives and Gradients
Second Derivatives and Hessian Matrices
Taylor Series Approximations with One Variable
Taylor Series Approximations with Multiple Variables
Stationary Points and Local Optima
Solution:
Saddle Points
Hessian Matrices and Local Optima
Solution:
Solution:
16.4 Convex/Concave Functions and Global Optimality
Convex and Concave Functions Defined
Solution:
Sufficient Conditions for Unconstrained Global Optima
Convex/Concave Functions and Stationary Points
Solution:
Tests for Convex and Concave Functions
Solution:
Unimodal versus Convex/Concave Objectives
16.5 Gradient Search
Gradient Search Algorithm
Gradient Search of Custom Computer Application
Solution:
Steepest Ascent/Descent Property
Solution:
Zigzagging and Poor Convergence of Gradient Search
16.6 Newton’s Method
Newton Step
Solution:
Newton’s Method
Newton’s Method on the Custom Computer Application
Solution:
Rapid Convergence Rate of Newton’s Method
Computational Trade-offs between Gradient and Newton Search
Starting Close with Newton’s Method
16.7 Quasi-Newton Methods and BFGS Search
Deflection Matrices
Quasi-Newton Approach
Guaranteeing Directions Improve
BFGS Formula
BFGS Search of Custom Computer Application
Solution:
Verifying Quasi-Newton Requirements
Solution:
Approximating the Hessian Inverse with BFGS
16.8 Optimization without Derivatives and Nelder–Mead
Nelder–Mead Strategy
Solution:
Nelder–Mead Direction
Solution:
Nelder–Mead Limited Step Sizes
Solution:
Nelder–Mead Shrinking
Solution:
Nelder–Mead Search of PERT Application
Exercises
References
Chapter 17 Constrained Nonlinear Programming
17.1 Constrained Nonlinear Programming Models
Beer Belge Location-Allocation Model
Solution:
Linearly Constrained Nonlinear Programs
Texaco Gasoline Blending Model
Engineering Design Models
Oxygen System Engineering Design Model
Solution:
17.2 Convex, Separable, Quadratic, and Posynomial Geometric Programming Special NLP Forms
Pfizer Optimal Lot Sizing Model
Convex Programs
Solution:
Special Tractability of Convex Programs
Separable Programs
Solution:
Special Tractability of Separable Programs
Quadratic Portfolio Management Model
Quadratic Programs Defined
Solution:
Special Tractability of Quadratic Programs
Cofferdam Application Model
Posynomial Geometric Programs
Solution:
Special Tractability of Posynomial Geometric Programs
17.3 Lagrange Multiplier Methods
Reducing to Equality Form
Lagrangian Function and Lagrange Multipliers
Solution:
Stationary Points of the Lagrangian Function
Solution:
Lagrangian Stationary Points and the Original Model
Lagrange Multiplier Procedure
Solution:
Interpretation of Lagrange Multipliers
Solution:
Limitations of the Lagrangian Approach
Solution:
17.4 Karush–Kuhn–Tucker Optimality Conditions
Fully Differentiable NLP Model
Complementary Slackness Conditions
Lagrange Multiplier Sign Restrictions
KKT Conditions and KKT Points
Solution:
Improving Feasible Directions and Local Optima Revisited
KKT Conditions and Existence of Improving Feasible Directions
Solution:
Sufficiency of KKT Conditions for Optimality
Necessity of KKT Conditions for Optimality
Solution:
17.5 Penalty and Barrier Methods
Penalty Methods
Penalty Treatment of the Service Desk Application
Solution:
Concluding Constrained Optimality with Penalties
Differentiability of Penalty Functions
Exact Penalty Functions
Managing the Penalty Multiplier
Sequential Unconstrained Penalty Technique (SUMT)
Barrier Methods
Barrier Treatment of Service Desk Application
Solution:
Converging to Optimality with Barrier Methods
Managing the Barrier Multiplier
Sequential Unconstrained Barrier Technique
17.6 Reduced Gradient Algorithms
Standard Form for NLPs with Linear Constraints
Conditions for Feasible Directions with Linear Constraints
Bases of the Main Linear Equalities
Basic, Nonbasic, and Superbasic Variables
Solution:
Maintaining Equalities by Solving Main Constraints for Basic Variables
Solution:
Active Nonnegativities and Degeneracy
Reduced Gradients
Solution:
Reduced Gradient Move Direction
Solution:
Line Search in Reduced Gradient Methods
Solution:
Basis Changes in Reduced Gradient Methods
Solution:
Reduced Gradient Algorithm
Reduced Gradient Search of Filter Tuning Application
Major and Minor Iterations in Reduced Gradient
Second-Order Extensions of Reduced Gradient
Generalized Reduced Gradient Procedures for Nonlinear Constraints
17.7 Quadratic Programming Methods
General Symmetric Form of Quadratic Programs
Quadratic Program Form of the Filter Tuning Application
Solution:
Equality-Constrained Quadratic Programs and KKT Conditions
Solution:
Direct Solution of KKT Conditions for Quadratic Programs
Solution:
Active Set Strategies for Quadratic Programming
Step Size with Active Set Methods
Solution:
Stopping at a KKT Point with Active Set Methods
Solution:
Dropping a Constraint from the Active Set
Solution:
Active Set Solution of the Filter Tuning Application
17.8 Sequential Quadratic Programming
Lagrangian and Newton Background
Sequential Quadratic Programming Strategy
Application of Algorithm 17E to Modified Pfizer Application 17.9
Approximations to Reduce Computation
17.9 Separable Programming Methods
Pfizer Application 17.4 Revisited
Solution:
Piecewise Linear Approximation to Separable Functions
Solution:
Linear Program Representation of Separable Programs
Correctness of the LP Approximation to Separable Programs
Solution:
Convex Separable Programs
Difficulties with Nonconvex Separable Programs
17.10 Posynomial Geometric Programming Methods
Posynomial Geometric Program Form
Cofferdam Application Revisited
Solution:
Logarithmic Change of Variables in GPs