Ledelse og Erhvervsøkonomi/Handelsvidenskabeligt Tidsskrift/Erhvervsøkonomisk Tidsskrift, Bind 30 (1966)

Measures of Performance for "Significant" Planning Problems.

Peter Mark Pruzan *)

1. Introduction.

Recent years have evidenced an extraordinary interest in the optimization of systems based upon the methods, techniques, and tools of operations research. Great success has been achieved in the optimal control of systems, i. e. the steering of existing systems so as to maximize some measure of system performance. However, when we regard the application of the scientific method to significant planning problems (e. g. investment planning for transportation networks) we find ourselves in a much less comfortable position. The long time lags between planning, decision-making, completion of a project and its usage give rise to serious doubts as to the adequacy and accuracy of the models we use to abstract reality. In particular, a major source of uncertainty with respect to the appropriateness of our models is the utility function, or measure of performance, to apply.

It is the intent of this paper to clarify and make precise the above remarks and to indicate a heuristic procedure, based upon operations research methods and computer technology, which can contribute to the formulation and solution of significant planning problems. To this end, section 2 of the paper considers the nature of the "control problem" and how such problems are solved using optimization procedures. Section 3 then indicates the major difficulties encountered when the control methodology is applied to "significant" planning problems. Section 4 proposes a procedure which can overcome many of these difficulties.



*) Ph. D., Associate Professor, The Institute for Mathematical Statistics and Operations Research, Technical University of Denmark. This article is a modified version of a paper delivered at the NATO seminar Decision Problems in Connection with Traffic Planning, Rold, Denmark, 1965.


2. The control problem.

The control problem can be defined as determining how to steer (operate) a system subject to disturbances and operating under various physical conditions so as to optimize the performance of the system, as measured by some utility function, over a time horizon. The following discussion presents a methodology, developed by operations research and other disciplines oriented towards systems optimization, used to formulate and solve such problems of an operating or control nature.

A decision maker is said to have a problem if he must choose one out of a number of allowable courses of action, where the alternatives can give different results, each of which has associated with it some utility or value1). We introduce the concept of a vector x(t) = (x1(t), . . . , xm(t)) as representing the state of the system at any future time t. Each of the elements of x(t) provides a measure of an aspect of the system which is of importance to the decision maker in his problem situation. The initial or current state of the system is x(0). It is a fundamental assumption that to any state of the system at any time within the planning period we can assign a relative measure of utility, u[x(t) | x(0)] - the utility the decision maker will associate with the state x(t) given that the current state of the system is x(0)2). The utility function is dependent upon x(0) since the decision maker's perception of the desirability of any state is influenced by the current state. In order to simplify the formulation of the utility function, constraints with respect to the permissible states are often imposed. This is equivalent to restricting the possible courses of action via the introduction of an infinitely large negative utility to be associated with the unpermissible states.



1) As a decision maker we consider here an individual or a group of individuals who are in agreement as to the utility function (i. e. measure of value). Two or more persons who cannot be subsumed into a system with one decision maker each have a problem and can use the methodology described in this section. Questions as to who »ought« to be considered as the decision maker in a given situation or how a decision maker »ought« to formulate his utility function so as to obtain Candide's ever-elusive »le meilleur des mondes possibles« fall within the sphere of philosophy and will not be considered here.

2) To account for the likely fact that the utility associated with any state x(t) may be a function of t as well, t can simply be considered as an element of the state vector. For example, x(t) = (x1(t), t), where x1(t) = net income at time t, and u[x(t) | x(0)] might be x1(t)·e^(-λt), where λ is a discount factor. In the most general case, where also the form of the utility function is time-dependent, we would have to express the utility function as ut[x(t) | x(0)].
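As a minimal sketch of the formalism just introduced (not part of the original article; the particular state variables, the discount factor and the constraint below are illustrative assumptions), the state vector, the utility conditioned on the current state, and the device of assigning an infinitely large negative utility to unpermissible states could be expressed as follows:

```python
import math

# Illustrative sketch only: a state is a small vector of measures, here
# (net income, time). The utility is conditioned on the current state x(0),
# and an unpermissible state is handled by an "infinitely" negative utility.

def utility(x_t, x_0, discount=0.10):
    """u[x(t) | x(0)]: assumed form, discounted gain in net income over x(0)."""
    net_income, t = x_t
    if net_income < 0:                      # assumed permissibility constraint
        return float("-inf")                # unpermissible state
    return (net_income - x_0[0]) * math.exp(-discount * t)

x_0 = (100.0, 0.0)    # current state: net income 100 at time 0
x_t = (140.0, 5.0)    # a possible state at t = 5
print(utility(x_t, x_0))
```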


Having introduced the concept of utility, we may regard the decision problem as one of choosing that course of action which will in some sense maximize the accumulated utility which will accrue to the decision maker's system over the planning period. The most commonly followed procedure, although there are others, is to maximize the cumulative expected utility. That is, if dFj[x(t) | x(0)] represents the probability that the j'th course of action will result in state x(t) at time t given that the current state is x(0), and if τ is the length of the planning period, the problem is to choose that course of action which will maximize:


$$ E_j\Big\{\int_0^{\tau} u[x(t) \mid x(0)]\,dt\Big\} \;=\; \int_0^{\tau}\!\!\int_{R_m} u[x(t) \mid x(0)]\; dF_j[x(t) \mid x(0)]\; dt $$

over all j = 1, . . . , n. E is an expectation operator here, n is the number of allowable alternatives under consideration and Rm is an m-dimensional metric space which spans x(t) over the period 0 to τ3).
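As a small numerical illustration of this criterion (the alternatives, probabilities and utility below are assumptions, not taken from the article), with a discrete set of outcomes and a discrete planning period the double integral reduces to a sum, and the preferred course of action is the one with the largest cumulative expected utility:

```python
# Illustrative sketch: choosing the course of action j that maximizes cumulative
# expected utility when time and outcomes are discretized. The distributions
# dF_j are represented as lists of (probability, state) pairs per period.

x_0 = 100.0

def utility(state):
    return state - x_0          # assumed simple utility: gain over the current state

outcomes = {                     # assumed outcome distributions per period, per action j
    "build": {1: [(0.6, 120.0), (0.4, 90.0)], 2: [(0.5, 150.0), (0.5, 110.0)]},
    "defer": {1: [(1.0, 100.0)],              2: [(0.7, 105.0), (0.3, 95.0)]},
}

def expected_cumulative_utility(dist_by_period):
    return sum(p * utility(x) for dist in dist_by_period.values() for p, x in dist)

scores = {j: expected_cumulative_utility(d) for j, d in outcomes.items()}
print(max(scores, key=scores.get), scores)
```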

2.1. Adaptive control.

In most control problems the major source of difficulty is the determination of the distribution of the outcomes associated with any course of action. When faced with doubt as to the adequacy of one's abstractions, the common sense approach is to learn from experience. This common sense approach can be employed to improve our representations of the distribution function and a process where feedback information is used to automatically modify the distribution function will be referred to here as an adaptive process (see chapter 16 of reference 1). The extension of our model building techniques to include the automatic up-dating of stochastic representations is undergoing rapid development in many fields and promises to add a new dimension to the concepts of optimal control and automation.

A well established application of an adaptive control process can be found in the use of adaptive forecasting. In many operating systems a decision to be made at any time depends upon a forecast of the effect of the decision upon future operations. For example, consider the decision as to how many units of an item should be produced at any time so as to provide some optimal service-cost operation. The expected service and cost depend upon a forecast, in the form of a probability distribution, as to the demand for the item over a planning period; a decision



3) To include situations where x(t) can contain both continuous and discrete elements, the expectation is expressed as a Stieltjes integral.


to produce any amount will result in a distribution of net inventory (including shortages) over the period and hence in an expected service level and expected "costs" for inventory storage and shortage. In order to automatically modify the forecasting so as to include the latest information available, use has been made of various methods of "adaptive forecasting". This procedure has been successfully applied in many companies where the number of different products prohibits the development of sophisticated "cause-and-effect" forecasting models and where emphasis has thus been placed upon relatively simple statistical forecasting schemes. One of the most widely used adaptive forecasting schemes is exponential smoothing (see refs. 2 and 3).
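A minimal sketch of exponential smoothing as an adaptive forecasting scheme (the smoothing constant and the demand figures are illustrative assumptions; see refs. 2 and 3 for the method itself):

```python
# Simple exponential smoothing: the forecast is revised each period with the
# latest observation, so the scheme adapts automatically to new information.
# The smoothing constant alpha and the demand series are assumed values.

def exponential_smoothing(demands, alpha=0.2):
    forecast = demands[0]
    forecasts = []
    for demand in demands:
        forecasts.append(forecast)                          # forecast made before observing demand
        forecast = alpha * demand + (1 - alpha) * forecast  # adaptive update
    return forecasts

print(exponential_smoothing([52, 48, 61, 57, 70, 66]))
```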

3. Some inconsistencies when this approach is applied to significant planning problems.

The major components of the general approach to problem formulation considered in section 2 were: a) the decision maker, b) the utility function, u[x(t) | x(0)], c) the set of allowable courses of action, j = 1, . . . , n, and d) the joint density functions, dFj[x(t) | x(0)]. In evaluating alternative actions when faced with significant planning problems it is very difficult to identify and define each of these components. However, since, once we have identified the decision maker, the number of alternative actions is usually rather limited4), and since we often can employ knowledge of the physical and economic processes involved to roughly determine the joint density functions, it is suggested that the component of problem formulation which is most difficult to define (i. e. develop) and which is most crucial with respect to the decision is the utility function5).

The source of this difficulty usually lies in the following query:
"How can the decision maker interpret and express his preferences in



4) An interesting methodological question which we will not consider here is how much investigation should be employed in order to determine the set of allowable courses of action to be evaluated. Reducing the decision problem to one of choosing between relatively few alternatives may greatly simplify the decision making, but will require extensive preliminary investigation.

5) Unfortunately, theoretical and/or empirical investigations of the sensitivity of the choice of the optimal course of action with respect to the utility function are very rare. The development of parametric linear programming is somewhat of an exception to this statement. (Here the effect of variations in the parameters, but not in the functional form, of the utility function is considered.) The usual sensitivity analyses serve only to indicate how the expected utility varies, given a prescribed utility function, with changes in the course of action.


situations where a decision can move the system to a new state "far
removed" from the current state and thus to a point in the ra-dimensional
state space with which he has little or no experience?"

This difficulty is usually compounded in significant planning problems due to the time lags which occur between the analysis, the decision, the completion of the project and the use of the project. (E. g. in the case of the underground transportation system now being considered in Denmark, project analysis time is estimated at 5 years; if the decision is made to start construction, 20 additional years may be needed to complete it, and the system may then be used for, say, 100 years.) In the remainder of this section we will consider some of the difficulties which are met when attempting to develop utility functions which are appropriate for use in decision making which can result in significant changes in the state of the system.

3.1. Possible inconsistencies when developing utility functions to guide significant planning decisions.

The fact that one seeks a function of a vector, u[x(t) \x(o)], usually results in attempts to simplify the form of the function. The three most usual procedures are what we will refer to as 1) the transformation method, 2) the separation method, and 3) a combination of the first two methods.

3.1.1. The transformation method.

If it is assumed that all of the state variables can be measured along a common scale, for example money, then much of the difficulty in describing the utility function is eliminated. Assume that there exist "value-wise transformation functions" Ti[xi(t)], i = 2, . . . , m, which transform xi(t) units into x1(t) units, so that the state of the system can be replaced by an equivalent sum of x1(t) units, x1'(t) = x1(t) + Σ Ti[xi(t)], the sum being taken over i = 2, . . . , m.

Then the problem of determining the (m+1)-dimensional utility space {x(t), u[x(t) | x(0)]} for any t is reduced to finding the 2-dimensional utility space {x1'(t), u1[x1'(t) | x(0)]}6). As a result, the problem of finding the course of action which maximizes expected utility is reduced to determining the maximum of:



6) u1[x1'(t) | x(0)] = u[(x1(t) + Σ Ti[xi(t)], x2(0), . . . , xm(0)) | x(0)], with the sum taken over i = 2, . . . , m, is a cut through the utility space at (x2(0), . . . , xm(0)). In other words, it specifies the relative utility of any value of x1' when all the other state variables are held at their initial values.


$$ \int_0^{\tau}\!\!\int_{R_1} u_1[x_1'(t) \mid x(0)]\; dF_j[x_1'(t) \mid x(0)]\; dt $$

over all j = 1, . . . , n.

This procedure is often followed in industrial applications of operations research and the common unit of measure is usually money. However, serious logical inconsistencies arise if the transformation method is used in decision problems where a decision can result in a significant change in the state of the system. This is due to the fact that a value-wise transformation Ti[xi(t)] represents the decision maker's evaluation of a trade-off between units of xi(t) and units of x1(t), and this implies that a ceteris paribus condition holds for all other state variables. Thus the transformation functions are really dependent upon the present state, x(0), and it is implicit in their use that all the state variables will have values close to their present values after a decision is made. It should thus be clear that the transformation method may lead to inconsistencies if it is used in connection with significant investment decisions. (For a more detailed discussion see references 4 and 5.)
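A small sketch of the transformation method (illustrative only; the particular transformation functions and figures are assumptions): every state variable is converted into money-equivalent units and summed, and the fixed conversion rates are exactly where the implicit ceteris paribus condition enters.

```python
# Transformation method, sketched: each state variable is mapped into units of
# x1 (here: money) and summed. The transformation functions are assumed values;
# being fixed numbers, they presuppose that the trade-offs observed near the
# current state remain valid however far the system is moved.

transforms = {
    "travel_hours_saved": lambda v: 25.0 * v,      # assumed value of an hour saved
    "accidents_avoided":  lambda v: 50_000.0 * v,  # assumed value per accident avoided
}

def money_equivalent(x):
    """x1'(t) = x1(t) + sum of Ti[xi(t)] over the non-monetary state variables."""
    return x["net_revenue"] + sum(T(x[name]) for name, T in transforms.items())

x_t = {"net_revenue": 1_000_000.0,
       "travel_hours_saved": 40_000.0,
       "accidents_avoided": 12.0}
print(money_equivalent(x_t))
```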

3.1.2. The separation method, or the method of value-wise independence.

Another procedure which is often employed when working with decisions of a marginal nature is to approximate the utility function by the sum:

$$ u[x(t) \mid x(0)] \;=\; \sum_{i=1}^{m} u_i[x_i(t) \mid x(0)] $$

This implicitly assumes that each state variable's contribution to the overall utility at time t is independent of the levels of the other state variables at that time. If such an assumption is appropriate, then the problem of determining the (m+1)-dimensional utility space for any value of t, {x(t), u[x(t) | x(0)]}, is reduced to the problem of determining m 2-dimensional utility spaces, {xi(t), ui[xi(t) | x(0)]}, i = 1, . . . , m.7)



7) If it is possible to assume value-wise independence, then the task of determining the joint density function dFj[x(t) | x(0)] is reduced to determining the marginal functions dFij[xi(t) | x(0)], and the task of determining the expected utility associated with the jth course of action is reduced to finding the value of

$$ \int_0^{\tau}\Big\{\int\!\!\cdots\!\!\int_{R_m} \sum_{i=1}^{m} u_i[x_i(t) \mid x(0)]\; dF_j[x(t) \mid x(0)]\Big\}\, dt \;=\; \int_0^{\tau}\Big\{\sum_{i=1}^{m}\int_{R_i} u_i[x_i(t) \mid x(0)]\; dF_{ij}[x_i(t) \mid x(0)]\Big\}\, dt. $$


However, just as was the case with the assumption of value-wise transformations, the assumption of value-wise independence may be unsuitable for the development of utility functions in significant planning situations. For example, it may be appropriate to assume that marginal changes in the highway system in a region and marginal changes in the region's industrial and social structure will make contributions to the region's development (utility) which are approximately independent of each other. However, for large changes in these - and other - state variables, there will most certainly be mutual interdependencies, and a significant change in the highway system may not contribute much to the region if there is not a corresponding industrial and social development which can effectively utilize the system. (See references 4 and 5.)
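A sketch of the separation (value-wise independence) assumption, with illustrative component utilities (assumed forms, not taken from the article): the overall utility is simply a sum of one-variable utilities, so no term can express the interdependence between large simultaneous changes.

```python
import math

# Value-wise independence, sketched: u[x(t) | x(0)] is approximated by a sum of
# one-variable utilities u_i[x_i(t) | x(0)]. The component functions below are
# assumptions; note that no term depends on more than one state variable.

component_utilities = {
    "highway_capacity":  lambda v: 10.0 * math.log(1.0 + v),
    "industrial_output": lambda v: 0.5 * v,
    "population_served": lambda v: 2.0 * math.sqrt(v),
}

def separable_utility(x):
    return sum(u_i(x[name]) for name, u_i in component_utilities.items())

print(separable_utility({"highway_capacity": 50.0,
                         "industrial_output": 120.0,
                         "population_served": 900.0}))
```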

3.1.3. The combination of the transformation and separation methods.

If it is assumed that value-wise transformations exist and that the contribution to utility from each of the state variables is independent of the levels of the other state variables, and if x1 is the common unit of measure, then the utility function has the form:


$$ u[x(t) \mid x(0)] \;=\; u_1\Big[x_1(t) + \sum_{i=2}^{m} T_i[x_i(t)] \,\Big|\, x(0)\Big] \;=\; \sum_{i=1}^{m} u_i[x_i(t) \mid x(0)] $$

It should be clear (and can be proved) that the combination of both of these assumptions implies that the utility for x1'(t) is linear in x1'(t). Few decision makers would be willing to accept the proposition that their utility for money is linear, particularly when large sums are involved. Yet, when they accept a problem formulation where all the state variables are transformed into equivalent costs and revenues, and the optimal decision is chosen as that which maximizes the expected gain, then they have implicitly accepted the proposition that their utility for money is linear.
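A brief sketch of why the two assumptions together force linearity (a reconstruction of the argument under the usual regularity conditions, not the article's own proof): if utility depends on the state only through the money-equivalent sum and is at the same time additively separable, then varying one variable at a time shows that increments of u1 depend only on the size of the increment, which characterizes the affine functions:

$$ u_1\Big[x_1(t) + \sum_{i=2}^{m} T_i[x_i(t)] \,\Big|\, x(0)\Big] = \sum_{i=1}^{m} u_i[x_i(t) \mid x(0)] \;\Longrightarrow\; u_1[y+d \mid x(0)] - u_1[y \mid x(0)] \text{ depends only on } d \;\Longrightarrow\; u_1[x_1' \mid x(0)] = \alpha\, x_1' + \beta. $$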

Let us now examine some of the implications of this with respect to two traditional guides to investment decisions, cost-benefit analysis and return on investment. Cost-benefit analysis implies a linear utility for money as the typical cost-benefit analysis first translates all benefits into appropriate monetary terms, then determines the costs involved in obtaining these benefits, and by comparing the transformed benefits and the costs provides a measure of the desirability of the project. Similarly, the return on investment approach implies the linear utility function. The future cash flows are transformed into units of present


value via a discounting procedure. The net time-adjusted cash flows are
then compared with the investment and this comparison serves as the
basis for evaluating the investment opportunity.

It appears then that we may not be consistent if we deny the linearity of our utility for money when large sums of money are involved and if we at the same time employ cost-benefit analysis or return on investment as the basis for evaluating significant investment opportunities. For a more detailed discussion of this point, see reference 6.
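As a small sketch of the discounting step referred to above (the rate and the cash flows are illustrative assumptions), the present-value comparison underlying the return-on-investment approach weights each money unit only by its timing, never by the total amount at stake - which is precisely the implicit linearity in money:

```python
# Net present value, sketched: future cash flows are discounted and compared
# with the investment. Every unit of money is weighted only by when it arrives,
# not by how much is already at stake, i.e. utility is implicitly linear in
# money. The rate and cash flows below are assumed figures.

def net_present_value(investment, cash_flows, rate=0.08):
    discounted = sum(cf / (1.0 + rate) ** t
                     for t, cf in enumerate(cash_flows, start=1))
    return discounted - investment

print(net_present_value(investment=1_000_000.0, cash_flows=[150_000.0] * 12))
```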

4. How can we improve our ability to solve "significant" planning problems?

We saw in section 3 that the usual means of developing utility functions may lead to decision making which is quite out of line with the actual wishes of the decision maker faced with making a significant decision. This may lead to undesirable structural changes in the system. How might we improve our ability to develop utility functions which are consistent with our actual preferences?

One seemingly logical answer would appear to be to develop more accurate and complex utility functions8). However, it should be emphasized that a decision maker's ability to interpret and express his preferences with respect to the states of the system is quite limited, due both to his lack of experience with the possible states which might result from his decision and to his inability to account for the technological (and social) changes which may take place before the planning period is completed.

Then what are we to do, since it appears that neither "simple" nor "complex" utility functions are consistent with our goal



8) Another seemingly logical approach, often referred to as "satisficing", might be to evaluate alternatives according to their ability to satisfy certain goals (e. g. increase sales by 30 % over the next 5 years). The goals are not considered as a measure of performance (or at most as a very crude measure, since either a goal is met or it is not met). Rather they are regarded as a justifiable expression of the decision maker's preferences. These goals define an acceptable region of outcomes and the planning problem is considered to be to find a course of action which will result in an acceptable result. A utility-wise interpretation of such an approach to planning is that all states within the acceptable region have identical utilities associated with them, while states outside the acceptable region have infinitely negative utilities associated with them. Thus this approach too, although conceptually simple, does not appear to afford a sound philosophical basis for planning decisions which is consistent with our aim of making optimal decisions.


of making optimal decisions? Are we limited to making significant
decisions based either upon analytical functions in which we have little
faith or upon unaided experience and intuition?

I would like to argue that this is not the case, and to suggest a procedure which can be employed to improve our ability to be consistent when evaluating alternative courses of action for significant planning problems and which relies upon close teamwork between the decision maker, the operations researcher and a computer. The heuristic procedure is as follows: a) start to formulate the problem using a simple form of utility function; b) determine the "optimal" action by following the procedure of section 2; c) using this decision and the joint density function which determines the probability that that course of action will result in any state, simulate the effect of the decision on the system; d) have the decision maker locate the undesirable results which occur and prescribe corrective action in the form of new operating and/or planning decisions; e) based upon the decision maker's prescription of corrective action, modify the utility function (and perhaps the set of restrictions); f) then determine the new "optimal" action and proceed to iterate in this manner until modifying the utility function does not result in a new "optimal" action. Proceeding in this manner, the utility function would be modified until it could be accepted by the decision maker as a reasonable expression of his preferences with respect to the state of the system and until, as best as can be predicted, it will not lead to undesirable structural changes. At the same time, valuable information on the sensitivity of the optimal action to changes in the utility function will be provided. This approach would harness both the analytical talents of the operations researcher and the practical experience of the decision maker and would, via the simulation, provide a means of experimentation which ordinarily is lacking when evaluating planning decisions. At the same time, the simulation procedure would be more efficient than a simulation of all the possible actions, since only those decisions which appear to be optimal would be simulated. Since simulation of very complex investment decisions can require huge computer runs, this benefit is not to be overlooked.
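A skeletal sketch of the iterative procedure just described (all function names below are placeholder assumptions; the article specifies the loop, not an implementation):

```python
# Skeleton of the heuristic loop of section 4. The callables passed in
# (optimize, simulate, review_with_decision_maker, revise_utility) are
# placeholders for the analytical model, the simulation and the decision
# maker's judgement; the loop stops when revising the utility function no
# longer changes the "optimal" action.

def iterate_planning(utility, alternatives, densities, optimize, simulate,
                     review_with_decision_maker, revise_utility, max_rounds=20):
    action = optimize(utility, alternatives, densities)          # steps a) and b)
    for _ in range(max_rounds):
        outcomes = simulate(action, densities)                    # step c)
        objections = review_with_decision_maker(outcomes)         # step d)
        if not objections:                                        # results acceptable
            break
        utility = revise_utility(utility, objections)             # step e)
        new_action = optimize(utility, alternatives, densities)   # step f)
        if new_action == action:                                  # no new "optimal" action
            break
        action = new_action
    return action, utility
```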

In conclusion, it would appear that this continual interplay between analytic formulation, choice of "optimal" action, simulation, evaluation of results, modification of the utility function, new choice, etc. etc. might provide an effective heuristic procedure for both developing appropriate utility functions and for finding the optimal course of action when dealing with significant planning problems.

5. References.

1) Bellman, R., Adaptive Control Processes, Princeton University Press, 1961.

2) Brown, R. G., Statistical Forecasting for Inventory Control, McGraw-Hill, 1959.

3) Brown, R. G., Smoothing, Forecasting and Prediction, Prentice-Hall, 1963.

4) Pruzan, P. M., and Jackson, J. T. R., "On the Development of Utility Spaces for Multi-Goal Systems", Erhvervsøkonomisk Tidsskrift, 4, 257-275, 1963.

5) Pruzan, P. M., "Beslutningsteori: Teorien for optimal beslutning i menneske-maskin systemer" (Decision theory: the theory of optimal decisions in man-machine systems), Ingeniøren, 5, 191-197, 1965.

6) Pruzan, P. M., "Is Cost-Benefit Analysis Consistent with the Maximization of Expected Utility?", in Operational Research and the Social Sciences, J. R. Lawrence (ed.), Tavistock Publications Ltd., to be published in fall 1966.