Example: Seminaive computation of the ancestors of marc
    delta_anc(0, X, Y)   <- parent(X, Y).
    delta_anc(J+1, X, Z) <- delta_anc(J, X, Y), parent(Y, Z), ~all_anc(J, X, Z).
    all_anc(J+1, X, Y)   <- all_anc(J, X, Y).
    all_anc(J, X, Y)     <- delta_anc(J, X, Y).

The second and third rules in the example are Y-rules: a Y-rule is characterized by the fact that the temporal argument in the head is J+1, while J appears as the temporal argument in the goals of the rule. This leads to the natural interpretation that J+1 denotes the "new" values, whereas J denotes the "old" values from the last state.
A rule such as the last rule in the example above is an X-rule: an X-rule is one where all the temporal arguments in the rule are identical; e.g., in the last rule above they all coincide with J. Thus, a possible interpretation of this rule is that the "old" values of all_anc are derived from the "old" values of delta_anc. But if we replace the temporal argument J by I+1 everywhere in the rule, the rule instead states that the "new" values of all_anc are derived from the "new" values of delta_anc. This second view of X-rules should be used, since it leads to the simple view that the "new" values in the head are computed using a mixture of old and new values from the body. Therefore, rules 1-4 in the previous example can be interpreted as follows: rule 1 initializes delta_anc from the base relation parent; rule 2 derives the new delta_anc values by extending the old delta_anc values through parent, discarding pairs already present in the old all_anc; rule 3 copies the old all_anc values into the new state; and rule 4, read with its temporal argument rewritten as I+1, adds the new delta_anc values to the new all_anc.
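For instance, here is the last rule after that rewriting (this is merely the renamed form of rule 4 above, shown to make the reading explicit):

    all_anc(I+1, X, Y) <- delta_anc(I+1, X, Y).

Together with the copy rule, it says that the new all_anc relation is the old all_anc plus the newly derived delta_anc tuples.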
XY-stratification is very powerful and conducive to the elegant formulation of classical graph-oriented algorithms that use aggregates in their computation. For instance, the shortest distances between the nodes of a connected graph can be computed by the following program.
Example: Shortest distances in a graph, where g(X,Y,C) denotes an arc from X to Y of cost C.
    delta(0, X, X, 0) <- g(X, _, _).
    delta(0, Y, Y, 0) <- g(_, Y, _).
    delta(J+1, X, Z, min<C>) <- delta(J, X, Y, C1), g(Y, Z, C2), C = C1+C2,
                                if(all(J, X, Z, C3) then C3 > C).
    all(J+1, X, Z, C) <- all(J, X, Z, C), ~delta(J+1, X, Z, _).
    all(J, X, Z, C)   <- delta(J, X, Z, C).

Floyd's algorithm provides a more efficient computation for the same problem.
Example: Floyd's algorithm: g(X,Y,C) denotes an arc from X to Y of cost C.
    delta(0, X, Y, C) <- g(X, Y, C).
    new(J+1, X, Z, C) <- delta(J, X, Y, C1), all(J, Y, Z, C2), C = C1+C2,
                         if(all(J, X, Z, C3) then C3 > C).
    new(J+1, X, Z, C) <- delta(J, Y, Z, C2), all(J, X, Y, C1), C = C1+C2,
                         if(all(J, X, Z, C3) then C3 > C).
    delta(J, X, Z, min<C>) <- new(J, X, Z, C).
    all(J+1, X, Z, C) <- all(J, X, Z, C), ~delta(J+1, X, Z, _).
    all(J, X, Z, C)   <- delta(J, X, Z, C).

Because of the lack of monotonicity, the usual differential fixpoint optimization cannot be used as such in the bottom-up execution of XY-stratified programs (moreover, many of these programs express sophisticated algorithms requiring ad hoc optimization under the direct control of the programmer). However, top-down optimization techniques, such as the magic set optimization and the left/right linear rule optimization, can be applied to XY-stratified programs. For instance, if we ask the question delta_anc(_, marc, X), LDL++ will use the specialization approach, whereby the second argument in the recursive predicate is instantiated to marc.
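A sketch of the kind of specialized program this approach would produce for the ancestor example (shown only as an illustration; the exact rules generated by the compiler may differ):

    delta_anc(0, marc, Y)   <- parent(marc, Y).
    delta_anc(J+1, marc, Z) <- delta_anc(J, marc, Y), parent(Y, Z), ~all_anc(J, marc, Z).
    all_anc(J+1, marc, Y)   <- all_anc(J, marc, Y).
    all_anc(J, marc, Y)     <- delta_anc(J, marc, Y).

The constant marc thus restricts the fixpoint computation to the ancestors of marc, instead of computing delta_anc for every person.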
The copy and delete rule optimization, discussed next, is unique to XY-stratified programs.
For instance, a copy rule can also contain additional goals, as in

    all_anc(J+1, X) <- all_anc(J, X), delta_anc(J+1, _).

which is still a copy rule. In fact, the compiler simply evaluates the delta goal and, if it is true, the pointer for the "new" version of all_anc is set to point to the "old" copy; otherwise it is set to point to the empty set. Consider now the following recursive program, which computes the temporal projection by coalescing overlapping periods into maximal periods.
Example. Coalescing after temporal projection for emp_dep_sal(Eno, D, S, Frm, To)
    e_hist(0, Eno, Frm, To) <- emp_dep_sal(Eno, D, S, Frm, To).
    overlap(J+1, Eno, Frm1, To1, Frm2, To2) <- e_hist(J, Eno, Frm1, To1),
                                               e_hist(J, Eno, Frm2, To2),
                                               Frm1 <= Frm2, Frm2 <= To1,
                                               distinct(Frm1, To1, Frm2, To2).
    e_hist(J, Eno, Frm1, To) <- overlap(J, Eno, Frm1, To1, Frm2, To2),
                                select_larger(To1, To2, To).
    e_hist(J+1, Eno, Frm, To) <- e_hist(J, Eno, Frm, To),
                                 ~overlap(J+1, Eno, Frm, To, _, _),
                                 ~overlap(J+1, Eno, _, _, Frm, To).
    %%%% auxiliary predicates %%%%
    distinct(Frm1, To1, Frm2, To2) <- To1 ~= To2.
    distinct(Frm1, To1, Frm2, To2) <- Frm1 ~= Frm2.
    select_larger(X, Y, X) <- X >= Y.
    select_larger(X, Y, Y) <- Y > X.

The fourth rule above is a delete rule: a delete rule is a copy rule whose body contains additional negated goals with temporal argument J+1. Thus, instead of copying the old relation in its entirety, only the tuples that do not satisfy the deletion conditions (here, the two negated overlap goals) are transferred to the new state.
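For concreteness, here is a small hypothetical trace, assuming just two facts for an employee e1 with overlapping validity periods (the department and salary columns are irrelevant to the projection):

    emp_dep_sal(e1, d1, 30000, 1, 5).
    emp_dep_sal(e1, d2, 32000, 4, 9).

    e_hist(0, e1, 1, 5).              % from the first rule
    e_hist(0, e1, 4, 9).
    overlap(1, e1, 1, 5, 4, 9).       % 1 <= 4, 4 <= 5, and the periods are distinct
    e_hist(1, e1, 1, 9).              % coalesced, since select_larger(5, 9, 9)

The delete rule does not copy (1,5) and (4,9) to state 1, since each matches one of the negated overlap goals; at state 1 only the maximal period (1,9) survives, and no further overlap is derivable.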
Consider now a goal such as

    delta_anc(I, X)

The LDL++ system uses pipelining in the computation of XY-stratified programs, whereby a new tuple is generated only when it is immediately needed by the calling goal. Thus, when no more tuples are requested by our goal above, the computation of the XY-rules stops. For instance, goals such as delta_anc(20, _) or all_anc(_, anode) are basically existential goals, and they never call the XY-rule computation again after they succeed.
In the more general situation, the goal delta_anc(I, X) fails when the delta relation becomes empty for some value of I, and the XY-rules are never executed again after that. Therefore, while a straight bottom-up execution of the ancestor program suggests an infinite computation because of the copy rules, the computation is in fact safe when the goal delta_anc(I, X) is used (but it would not be safe if the goal all_anc(I, X) were used, since the copy rule keeps producing all_anc tuples for ever-larger values of the temporal argument).
In many programs, answers cannot be returned incrementally, since only the values at the end of the computation are of interest. In this case, the programmer should write rules that (i) identify the step at which the computation completes, and (ii) use the values computed at that step. For instance, for the temporal projection example, the following rules can be used:
    lastperiod(I) <- overlap(I, _, _, _, _, _), ~overlap(I+1, _, _, _, _, _).
    final_e_hist(Eno, Frm, To) <- lastperiod(I), e_hist(I, Eno, Frm, To).

Thus, the final state is the one that has no successor. A program calling predicates defined in XY-rules can only refer to two successive states (since only two states are stored, the previous states are lost). However, for copy predicates, only one state can be used by external computations (insofar as only one version of the predicate is kept by the system).
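Continuing the hypothetical two-fact trace given earlier, overlap(1, e1, 1, 5, 4, 9) holds while no overlap tuple is derivable at state 2, so these rules would yield:

    lastperiod(1).
    final_e_hist(e1, 1, 9).

For Floyd's algorithm we can write: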
    lastfloyd(I) <- delta(I, _, _, _), ~delta(I+1, _, _, _).
    floyd_results(X, Y, C) <- lastfloyd(I), all(I, X, Y, C).
For instance, the following program extracts the least-cost spanning tree, rooted in a node a, from an undirected graph where each edge is represented by the pair of arcs g(X,Y,C), g(Y,X,C).
Example. Prim's algorithm.
    prim(0, nil, a).                 % the tree initially contains only the root a
    solved(I, Y) <- prim(I, _, Y).   % a node is solved once it is reached by a tree edge
    newedgs(I+1, X, Y, C) <- solved(I, X), g(X, Y, C), ~prim(I, _, Y).
    leastedg(I, min<C>) <- newedgs(I, X, Y, C).
    prim(I+1, X, Y) <- prim(I, X, Y).
    prim(I, X, Y) <- leastedg(I, C), newedgs(I, X, Y, C), choice((I), (Y)).

Here the choice construct is used to ensure that only one edge is added at each step (otherwise the resulting graph might not be a tree). Observe that the last rule is a choice rule, satisfying the condition that the temporal variable is in the first argument of choice. The significance of this requirement is that choice can only be used to enforce FD constraints for the values generated in a certain state (i.e., for a particular value of the temporal argument). Choice should not be used to enforce constraints over the global temporal history.
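Following the same final-state pattern used above, one might extract the resulting tree with rules such as these (lastprim and prim_tree are hypothetical names, not part of the original example):

    lastprim(I) <- newedgs(I, _, _, _), ~newedgs(I+1, _, _, _).
    prim_tree(X, Y) <- lastprim(I), prim(I, X, Y).

Here the computation ends when no candidate edges remain; the pair (nil, a) recording the root can be filtered out if desired.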
As in SQL, the builtin aggregate min returns the least value, but not the edge at which this minimum occurs. But LDL++ supports the definition of new aggregates, which can be freely used in XY-stratified programs. Thus, Prim's algorithm can also be expressed as follows:
Example. Prim's algorithm using a new aggregate
    solved(0, a).                    % nodes solved so far
    prim(I+1, aleast<(X, Y, C)>) <- solved(I, X), g(X, Y, C), ~solved(I, Y).
    solved(I+1, X) <- solved(I, X).
    solved(I, Y) <- prim(I, (X, Y, C)).
    %%%% definition of the new aggregate aleast %%%%
    single(aleast, (X, Y, C), (X, Y, C)).
    multi(aleast, (X1, Y1, C1), (X2, Y2, C2), (X2, Y2, C2)) <- C2 < C1.
    multi(aleast, (X1, Y1, C1), (X2, Y2, C2), (X1, Y1, C1)) <- C2 >= C1.
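To see how aleast works, consider three hypothetical candidate edges (b,c,3), (b,d,1), (b,e,2) produced in the same state. The single rule initializes the running least edge, and the multi rules then compare it against each further candidate:

    single(aleast, (b,c,3), (b,c,3)).           % first candidate becomes the running least
    multi(aleast, (b,c,3), (b,d,1), (b,d,1)).   % 1 < 3: replace the running least
    multi(aleast, (b,d,1), (b,e,2), (b,d,1)).   % 2 >= 1: keep the running least

The aggregate thus returns the least-cost edge itself, not just its cost, which is exactly what the builtin min could not provide.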