Shelves of Technical Books

Bibliography Readings/Notes
Scrap Heap: Abstracting Algorithms from Programs

0.1.0 2023-10-23 16:27


Sedgewick, Robert.  Algorithms.  Second edition.  Addison-Wesley (Reading, MA: 1983, 1988).  1989 reprint with author's corrections.  ISBN 0-201-06673-4.
     This book presents an interesting challenge.  It talks about algorithms but it does not show any algorithms, nor does it define algorithm as anything more than a "problem-solving method suitable for implementation as computer programs" [p.4].  Instead, it exhibits programs that are implementations of algorithms and discusses them as if the algorithm is apparent.  The reader is left with the challenge of learning to discriminate between what is essential about an algorithm, and how to preserve that in an implementation, versus what is inessential to the algorithm and introduced on account of the implementation and the use of particular programming tools.  I am concerned that this approach, while well-motivated, is not successful.  My evidence is in the criticisms of this and later editions that dwell on the choice of programming language and on stylistic matters in the use of the chosen language.  This places too much emphasis on code.  Although code rules these days, I remain unconvinced that this simplification is a good thing.  For me, one of the great insights in the development of software is identification of layers of abstraction for conquering the organization of complex application programs.  Separating design, algorithm, and implementation is a critical first step toward that mastery.
     Meanwhile, Algorithms serves up a handy set of recipes for a variety of basic computing situations.  The 45 sections cover fundamental methods of widespread application in computing and software development. The presentations are straightforward and illuminating.  The compilation bears re-examination every time one sits down to identify key methods for a new application.  I recommend supplementing this material with the practical methods of Software Tools and the introspective explorations of Programming Pearls.  Most of all I encourage development of enough sense of the material in The Art of Computer Programming to be able to read the discussions of algorithms there, even if you never use the particular implementations.

    A version of this précis is published on amazon.com.

THIS IS A SNAPSHOT OF WORK IN PROGRESS, A COPY OF r010101.  IT IS A SOURCE OF SCRAPS FOR BRANCHING TO OTHER COVERAGE.  There is no further development on this note.  Continuation of this inquiry is in the following and related materials:


This is becoming too long and wandering quite a ways beyond the original inspiration in Algorithms.  The article will be separated as an independent work and this commentary will simply link to it.  The article will still be on Do Programs Teach Algorithms, but the longer, detailed excursions will be pulled into articles of their own (such as Homage to the Bit and others being worked up as part of the Miser Project).

Summary

1. Implementations of Algorithms
1.1 A Binary Search Implementation
1.2 A Binary Search Algorithm
1.3 Dancing Between Algorithm and Implementation
 
2. Algorithmic Baggage
 
3. Abstracting Algorithms
 
4. Separating Algorithm, Implementation, and Design

References


Summary

What is the relationship between algorithms and their implementations that is important to the mastery of computer programming?  Is it foundational merely for computer science or does it inform the development of all successful computing practice?  Is there a firm distinction or is it conceptual and qualitative?  

Reviewing Sedgewick's approach leads me to explore the separateness of algorithms more closely.  I conclude that writing down an algorithm in any form commits one to a framework, like it or not.  Formalism is like that.  

Even so, I maintain, differentiating statements of algorithms from programs is indispensable.  Abstracting algorithms encourages students and practitioners to develop their ability to identify, operate with, and interpret levels of abstractions.  It is at the heart of computing.

1. Implementations of Algorithms

Every time we look at the implementation of an algorithm, we see more than the algorithm.  An implementation always adds more detail than what is essential to the algorithm.  An implementation is also likely to take something away from the algorithm.

1.1 A Binary Search Implementation

There is a useful example on the Miser Project page of notations for code and data: two implementations of the same algorithm are presented.  One implementation is in Python, the other is in C.  The implementations are clearly different, far more than the superficial similarity of language constructs reveals.  But the algorithm is the same.  It is a flavor of Donald E. Knuth's Algorithm 6.2.1B.

Here is the more-general implementation, the one in Python:

def BinSearch(List, item):
    """Return the index of item in the already-sorted List;
    return -1 when item is not in List.
    Based on Donald E. Knuth's Algorithm 6.2.1B,
    cleaned up to work with 0-origin lists, tuples, and indexes."""
    L = 0                    # Current left-most position
    R = len(List)            # Current right boundary
    while L < R:
        j = (L + R) // 2     # A place to try in the
        if List[j] == item:  #   candidate interval
            return j
        if List[j] < item:   # When no match yet, close off
            L = j + 1        #   the area where item can't be
        else:                #   and continue in the interval
            R = j            #   that remains
    return -1                # Return impossible index if no match

 This version is similar to a statement of the essential algorithm, yet it does not provide a precise statement of the algorithm.  Far from it.  

Things to notice:

1.2 A Binary Search Algorithm

To see the difference between the statement of an algorithm for binary search and the BinSearch implementation, we will use the style of algorithm presentation introduced by Donald Knuth [Knuth97: 1.1]:

Algorithm Bin (Binary search).  Given a list of items K_0, K_1, ..., K_{N-1}
in non-decreasing order K_0 ≤ K_1 ≤ ··· ≤ K_{N-1},
this algorithm finds a position, if any, having a given item, K.

Bin1. [Initialize.]
Set L to 0; set R to N, the number of items.
(If there are no items, N is 0.)

Bin2. [No Luck?]
Let Rem = R - L, the number of eligible items remaining.
If Rem = 0, terminate the search as unsuccessful.
(Each time we return to Bin2, Rem is smaller and we must eventually find a matching item or terminate with all possibilities eliminated.  See Bin4 and Bin5 to confirm that.)

Bin3. [Take a Look.]
(L < R and if K is among the eligible items then K_L ≤ K ≤ K_{R-1}.)
Set j to floor((L+R)/2).
(j is always an index within one place of the middle of the interval L through R-1.  L ≤ j ≤ R-1 always.)
If K_j < K, go to Bin4.  If K < K_j, go to Bin5.
Otherwise, the search is successful and K_j is a matching item.

Bin4. [Look Higher Next.]
(If K is among the eligible items, then K_{j+1} ≤ K ≤ K_{R-1}.)
Set L to j+1.  Go to Bin2.
(0 ≤ R - L < Rem.  We have eliminated at least half of the current items as ineligible.)

Bin5. [Look Lower Next.]
(If K is among the eligible items, then K_L ≤ K ≤ K_{j-1}.)
Set R to j.  Go to Bin2.
(0 ≤ R - L < Rem.  We have eliminated at least half of the current items as ineligible.)

The statement of the algorithm relies on a supposedly pre-existing framework.  We will discuss that more in section 2.  For now, it is valuable to notice that there is an implicit framework and perhaps ponder (1) how much of it we take for granted in reading the algorithm and (2) how much the author of this description takes for granted about what needs to be said and what it takes to understand it.
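To make the correspondence concrete, the steps of Algorithm Bin can be transcribed directly into Python.  This is my own rendering, not a definitive one: the step labels become comments, the gotos become a loop, and unsuccessful termination is signalled by returning None rather than an impossible index.

```python
# A direct transcription of Algorithm Bin (my rendering; not Knuth's code).
def algorithm_bin(K, key):
    """Search sorted list K for key; return a matching index, else None."""
    # Bin1. [Initialize.]
    L, R = 0, len(K)
    while True:
        # Bin2. [No Luck?]  Rem = R - L eligible items remain.
        if R - L == 0:
            return None          # terminate the search as unsuccessful
        # Bin3. [Take a Look.]
        j = (L + R) // 2         # floor((L+R)/2)
        if K[j] < key:
            # Bin4. [Look Higher Next.]
            L = j + 1
        elif key < K[j]:
            # Bin5. [Look Lower Next.]
            R = j
        else:
            return j             # successful: K[j] is a matching item
```

Comparing this transcription with BinSearch shows how little separates the two, and also how the framework of the statement (gotos, parenthetical assertions) must be re-expressed in the implementation's own terms.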

This statement of the algorithm is contrived so that someone skilled in the analysis of algorithms can independently confirm some important characteristics:

Also, notice that the annotations made as parenthetical comments are not strictly part of the statement of the algorithm.  The annotations provide assertions about the algorithm that are important in comprehending the achievement of its function and of its efficacy.  The annotations also support confirmation that the recipe given is indeed that of an algorithm for the identified function.

We are looking here at a few ideas that might be worth developing separately: 
     1. That the statement of an algorithm has a certain looseness to it that is important.  Yet it is easy to agree on, and demonstrate, that there is an interpretation of the statement in an implementation that preserves the essentials of the algorithm.
     2. That implementations can be viewed as interpretations of (formal or informal) algorithms.
     3. That the annotations are indispensable to the statement of the algorithm and to its being seen as satisfying the requirements for accomplishment of the stated purpose with an algorithmic solution.

It will probably be valuable to identify what the technical conditions on algorithms are and how they are satisfied with this particular example.  I don't want to do that here.  Perhaps in a companion article, or a brief sidebar.

At some point, we want to nail down this way of speaking about the statement of an algorithm, versus the algorithm, versus an implementation of the algorithm in a computer program.  We want to establish that, whatever the difficulties that even the statement of an algorithm introduces on "grokking" the algorithm, it is valuable to take that route. (2001-02-20)

Someplace we might talk about focal attention versus subordinate intention, the distinctions that Polanyi introduced concerning language.  That and how statements of algorithms are methods and there is a relationship to behavior that always has the algorithm be "off the paper" and yet something we might see ourselves having our focal attention on when we examine the statement of the algorithm.  (2001-02-20)

1.3 Dancing Between Algorithm and Implementation

When I am working with statements of algorithms and implementations, I toggle back and forth between them.  Experiences in developing the implementation suggest refinements -- usually simplifications -- to the algorithm and its statement that are then reflected by economies in the implementation.  The process can continue until I am satisfied with the result.  

When I developed BinSearch, I started by consulting Knuth's statement of Algorithm 6.2.1B.  I immediately introduced transformations that allowed my preferred style to be expressed in BinSearch.  I satisfied myself that none of the transformations sacrificed anything that is essential to preserving the algorithm, its correctness, or its performance.

I did all of this informally, the objective at the time being to provide a useful worked example of notational conventions and their application, not to demonstrate algorithmic analysis.

In questioning whether programs teach algorithms, I took a fresh look.  Although I again consulted Algorithm 6.2.1B, this time I created Algorithm Bin by re-abstraction from BinSearch.  I also restated it in a way where I could demonstrate that I had captured everything that there is in Algorithm 6.2.1B, and more.

Discuss how the statement of Algorithm Bin has a pleasant separation of concerns.  It is abstracted completely away from any application.  Contrast this with Knuth's statement, and also with Sedgewick's.  This is a useful illustration of the abstraction of design from algorithm and of algorithm from implementation.  (Both Knuth and Sedgewick cast the algorithm in the context of a wider purpose.  We can reflect that in the discussion of design vs. algorithm.)  2001-02-20.


     Talk about why I don't go the final step and use while (L != R):, leaving the safety cord attached even when it is demonstrably unnecessary.
     In creating Algorithm Bin as a re-abstraction of BinSearch, it is not like I walked up with a blank mind and then figured out an abstraction that gave me an algorithm I could use.  I was certainly aware of the function of BinSearch and what it was intended to accomplish by me, its author.

We can consider decorating code with information about the algorithm.  This might be a form of tangling that has the algorithm carried with the code so that someone can inspect it as a whole.  
     The value of that is that the code is tied to the algorithmic statement that it provides a valid interpretation of.  It puts rigor around what the intended function is, and the only knowledge of the implementation that it is safe for a user of the implementation to rely on (assuming that one might want to substitute a different implementation that preserves the agreed interface and the algorithm).
     If I were to do that, I would want a very clear notation for distinguishing discussion about the implementation from the annotations that tie the implementation to the algorithm.  This might make for some wordy code.  It would certainly have a valuable self-contained property to it.  It is apparently not what most programmers want to look at.
     I would also want a way of tying in implementation conditions and constraints, so that the mapping of the algorithm is very clear, and the constraints of the implementation are also separately very clear.  I am thinking of constraints like Bin's L+R being a defined quantity in the operational system of the implementation.
     If I attempt this, it will be by coming up with a new version of BinSearch here.
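A first sketch of how such a decorated BinSearch might read (my improvisation; the assert lines restate Algorithm Bin's parenthetical annotations, though a clearer notation for separating algorithm annotation from implementation commentary is still wanted):

```python
def BinSearch(List, item):
    """Return the index of item in the already-sorted List, or -1 if absent.

    Decorated as a valid interpretation of Algorithm Bin: the [BinN]
    comments and assert lines tie the code to steps of the algorithm.
    """
    L = 0                        # [Bin1] current left-most position
    R = len(List)                # [Bin1] current right boundary
    while L < R:                 # [Bin2] Rem = R - L items remain eligible
        Rem = R - L
        j = (L + R) // 2         # [Bin3] a place to try in the interval
        assert L <= j <= R - 1   # [Bin3] j lies within the interval
        if List[j] == item:
            return j             # [Bin3] successful search
        if List[j] < item:
            L = j + 1            # [Bin4] look higher next
        else:
            R = j                # [Bin5] look lower next
        assert 0 <= R - L < Rem  # [Bin4/Bin5] the interval always shrinks
    return -1                    # [Bin2] all possibilities eliminated
```

The asserts are algorithm annotations, not implementation checks; that is exactly the distinction a proper notation would have to make visible.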

2. Algorithmic Baggage

Point out that we pretty much always say more than is essential.  And it is still important to have a form of expression that has it be understood that the algorithm is definitely and always more abstract than the implementation.
     Every expression of an algorithm has baggage.  This includes assumptions about the existence of orderings, the assumption of 0-origin indexing, the use of natural numbers for indexes, and the stability of the objects involved.  That the algorithm is effectively carried out atomically, while itself being composed of definite steps, is a key assumption in almost all expressions of algorithms.  This is part of the framework.
     Then there is the simple use of ordinary language along with borrowing of mathematical objects and notations, all more framework for expressing algorithms.
     The value of using ordinary language and notations distinct from those of program code must not be underestimated.  It gives us a way of distinguishing the two, and of being able to establish what is essential and what is incidental (but perhaps inescapable baggage) of the given implementation.  We can also begin to separate the baggage of the algorithmic framework from the baggage of the implementation system.
     It is important that we appreciate the value of having more than one perspective on this.  It gives us a place to separate problem from solution, and be clear which is a statement of what.  The algorithm-implementation separation contributes to the separation of the abstraction from the statements of it.

3. Abstracting Algorithms

Consider that the level of abstraction for statements of algorithms might need to be improved.  
     One way is to notice that the objects appealed to in the statement of the algorithm might be at the wrong level.  A large part of the baggage in our example is related to the level at which we deal with ordered lists via indexes.  There might be a way to change the level such that we address lists in other terms that are somehow liberated from the baggage about indexes and their origins.  It might be a quite different algorithm, assuming quite different basic operations, and we won't go there.
     Another approach is to make reliance on abstract elements explicit rather than tacit.  What we might get to is a form of expression of algorithms that can be taken as concrete, given a representation of the abstract elements.  

     We are always going to have algorithmic baggage.  Can we arrange baggage in such a way that the baggage is understandable and the abstracted algorithm is pretty clear?  We take a look at a toy case using an abstract structure for the bit.
     This might be better described as concretizing abstractions.  That is, we want to make the subject-matter of the algorithm, the structures that are understood, explicitly abstracted so that we can see what the dependencies are.  To do this, we still need a framework, but one that makes explicit that which was originally tacit.  (2001-02-20)

The bit abstraction began as an exercise early in the original Miser Project when I began to look at how to abstract out data types from algorithms, and look at the kind of automatic program transformations that become available as a consequence.  The title for this is Homage to the Bit, and I fancy publishing it in BIT, on some anniversary occasion.  Here is a sketch, with the full treatment better done in its own paper.  I will put enough in this article to expand on the problem of baggage and what one must deal with to "deliver abstractions" into the concrete world of a programming system.
     1. Consider that we arrange a way for data types to always be typical of their class.  We won't expand on that here, just introduce it as the adopted approach.  We want to introduce concrete data types that have certain qualities that preserve abstractness in some useful and extremely practical sense.  In particular, the implementations are fully substitutable, and this can be done selectively and anywhere in a computational method.  We illustrate this idea by presenting a "concrete abstraction" for one of the most fundamental data types of all, the binary bit.
     2. For example, when b is an 01bit (read "oh-one-bit") then b.0 is the 0-bit and b.1 is the 1-bit.  If we have an 01bit, we can get any 01bit we want.  Every instance is typical of the class.  
     3. Notice how much framework we are heaping on this very simple notational invention.  I mean some 01bit, b, and I specify features of 01bits by using the .-notation.  That is, the 0 feature, b.0, is the 0-bit of 01bits, and the 1 feature, b.1, is the 1-bit of 01bits. 
     4. Don't worry about this too much.  This is not an ordinary way of speaking, and the notation is used in a critical way.  Just look and notice how we are using ordinary language to talk about a concrete (implemented) thing, and what it takes for us to be rigorous in doing that.  Also notice that almost everything is in our (framework) language.  What makes these concrete abstractions, in our way of speaking here, is that we have a rigorous (formal) framework language that allows us to say all there is that can be discerned about some things.  They have some concrete existence, let's suppose, but we are only able to "see" them as if there is an actual abstraction there.  That is, the framework guarantees that we don't get to see (in the framework context) anything more than the abstraction.  Now, notice how this is not entirely true.  For one thing, the idea of bits is extremely simple.  What we have here is a pretty elaborate framework so we can talk about them as if they are something that can be talked about!  Basically, 01bits are concrete abstractions that have two distinct individuals.  The one we call the 0-bit is designated 01bit.0.  The one we call the 1-bit is designated 01bit.1.  And the way that we have of telling whether the one we have in hand, b, is the 0-bit or not is by the designation b.is-0.  Notice that almost everything in the framework is about how we are going to designate individuals and tell which one is at hand, so to speak.  Our biggest job is agreement on language and terminology.  (2001-02-21)
     5. Even at this point, you don't have any assurance that 01bits have anything to do with our everyday notion of bits other than I have said so.  That is, there are 0-bits, 1-bits, and they are different.  That's about all there is to bits.  It doesn't even matter how I pick one 01bit individual to be the 0-bit and the other to be the 1-bit, so long as I keep it all straight.  This is important: there are syntactic features that allow us to converse about bits and think we know what we are talking about.  It happens that the choice of feature is arbitrary, although it must be fixed.  (The notion of the dual of Boolean algebras lurks in here.  We'll not go into that here.)  
     6. I am saying that to come up with a concrete abstraction, we need to fix the features and their expression.  The labels don't matter, the discrimination among the features and any specific behavioral conditions are what matter.  So much baggage.  So few bits.
     7. When we have an 01bit, how do we know which bit we have?  We have to take a position about that to be able to operate with these objects.  We could take a common approach and introduce a proposition that applies to all bits: b.is-0.  The framework assumption here is that we have some notion of truth or, at the very least, are able to deal conditionally with the answers to questions.  This is pretty fundamental for algorithmic frameworks, so we'll go with b.is-0.  
     8. There is a metaquestion here too.  How do I know that I have an 01bit at hand?  Hold that thought.  (2001-02-20)
     9. b.is-0 works for 01bits because in our ordinary notion of binary bits, a given bit is either the 0-bit or it isn't, and if it isn't it must be the 1-bit.  The use of a specific proposition is a little inconvenient when generalizing to more-interesting finite sets, and we will ultimately prefer to introduce some equality determination for members of abstractions like 01bits.  Either way, we are making even more framework assumptions.  For now, b.is-0 works just dandy.

Now, if I presented you with a programming system's fixed data type, 01bit,  and said these are (representations of) the binary bits and here's a notation for specifying them and examining them, I wouldn't have done much.  
     1. I would have presented you with an implementation that has an intended interpretation as binary bits.  It might even have the intended pragmatic interpretation of corresponding to the binary bits of an underlying computational mechanism with the assurance that these bits correspond to the bits in the memory system of that mechanism. 
     2. I want to do something different than that.  The 01bits are intended to be concrete abstractions, and we don't want to lose the abstractness of it.  There might be some flavor of 01bit that has all of that connection to a predictable and known implementation.  But that is not to be assumed of 01bits generally.
     3. Here is a Python-flavored function using 01bits:

          def 01and(a: 01bit, b: 01bit) 01bit:
               if a.is-0:
                    return a
               else:
                    return b

Although I would never expect to be able to say this in Python (it not being in the spirit of the language), I am using this form for contrast and to have an existing framework.  This is also an imperfect model, so take it as merely exemplary.
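For contrast, here is one way the same definition might be emulated in Python as it actually exists.  This is my improvisation, not a committed design: identifiers cannot begin with a digit, so 01bit becomes OhOneBit, and the features b.0, b.1, and b.is-0 become b.zero, b.one, and b.is_0.

```python
# My improvised emulation of the 01bit concrete abstraction in real Python.
class OhOneBit:
    """Every instance is typical of the class: from any 01bit in hand we can
    obtain the 0-bit, the 1-bit, and ask which one we have."""
    def __init__(self, one):
        self._one = one              # hidden representation, not a feature
    @property
    def zero(self):                  # the feature b.0: the 0-bit
        return OhOneBit(False)
    @property
    def one(self):                   # the feature b.1: the 1-bit
        return OhOneBit(True)
    @property
    def is_0(self):                  # the feature b.is-0
        return not self._one

def oh_one_and(a, b):
    """Counterpart of the 01and sketch above: the 0-bit dominates."""
    return a if a.is_0 else b
```

Even this small emulation shows the baggage: a hidden representation must exist somewhere, and the discipline of exposing only the features is carried by convention rather than enforced by the language.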
    4. What I want to suggest is programming-system machinery where the 01and function is understood as the expression of an algorithm as much as that is possible in a concrete programming system.  In particular, this definition does not depend on a fixed sense of 01bit, but simply requires delivery of 01bits -- parameters that have the features of 01bits and that satisfy whatever conditions exist for a concrete flavor to satisfy the abstraction.
    5. It might be more general to suggest the following notation, not unlike the template notion of some languages:

          def 01and(bit: 01bit)(a: bit, b: bit) bit:
               if a.is-0:
                    return a
               else:
                    return b

This might provide a stronger sense of what is intended.  I am concerned that this implies too much and I am not wedded to this approach.  So far.
     6. I haven't said how one defines 01bit in this kind of system.  I'm not prepared to make a commitment to that, which is my nice way of saying I haven't figured it out yet.
     7. What you can count on is that if any a and b are supplied that do not satisfy the conditions for the concrete abstraction, bit, then the function fails.  It doesn't return a result, it fails.  Furthermore, if the implementation of the function returns a result, it is assured to satisfy the concrete abstraction, bit.
     8. Given this function, 01and, assume that the programming system allows ways that one can cast an existing object that has the necessary qualities of a bit as an 01bit.  That is, one does not have to be content with arranging things to be 01bits before they are usable in operations defined on 01bits.  We could instead define an 01bit over some other type, let's say Boolean, in which we specify how the features of an 01bit are derived from the features of a Boolean.  This gives an interpretation of a Boolean as an 01bit.  Generally, there are many ways to do this, so we might want to name particular interpretations or use a descriptive notation in which we show the particular interpretation.  Suppose we use the name Boolean01bit for the interpretation that has true be the 1-bit and false be the 0-bit.  Then Boolean01bit(Boolean.true) is 01bit.1.  (2001-02-20)
     9. There's something I want to add here, but I have forgotten it for the moment. It is about how all we are talking about is the features of some hidden thing, a concrete thing that we only want to see in accordance with an abstraction that it supports.  We are, let us say, using these concrete abstractions as attributive views onto something.  The something is the concrete part.  The abstraction is what we have when we only allow those particular attributions to be seen as the features.  The validity of the concrete abstraction is that its behavior never contradicts the abstraction.  In computing terms, we would talk about hiding an implementation.  In logic, we would say that we have a valid interpretation of the abstraction.  (2001-02-21)
     10. There can be ways to cast values from 01bit to Boolean, and this means that the result of 01and can be made into a Boolean without much difficulty.  For example, 01bitBoolean(01bit.1) could be Boolean.true.
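As a sketch in ordinary Python (my improvisation; the special typography does not survive in these names, and nothing here settles whether they deserve to be regular functions), the two interpretations might look like:

```python
# Improvised sketch of Boolean01bit and 01bitBoolean, using Python's bool
# for Boolean and a minimal 01bit emulation for the concrete abstraction.
class OhOneBit:
    def __init__(self, one):
        self._one = one              # hidden representation
    @property
    def is_0(self):                  # the feature b.is-0
        return not self._one

def boolean_01bit(b):
    """Interpretation Boolean01bit: true is the 1-bit, false the 0-bit."""
    return OhOneBit(bool(b))

def bit_01boolean(x):
    """Interpretation 01bitBoolean: the 1-bit is true, the 0-bit false."""
    return not x.is_0
```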
    11. I am messing with the typography here.  I am not quite prepared to concede that Boolean01bit and 01bitBoolean are regular functions (with regard to whatever functions are in the framework), so I have used special typography in their names.
    12. It can appear, at this point, that we haven't done anything.  That is, we have made explicit a way to use 01bits as Booleans and Booleans as 01bits.  Aren't we just stating the obvious?  Exactly so.  We have made explicit what, in some frameworks, is not only taken as tacit, but it might not even be recognized that such is being done -- that there is any other way to look at it.  Until I told you that the definitions of Boolean01bit and 01bitBoolean were the obvious ones, I say it wasn't obvious yet.  One could jump to those conclusions, but there is no basis in anything said up to this point.  (2001-02-20)
     13. Here's the next piece of information.  All there is to know about 01bits is what you've been told here.  That is, the only known features are 01bit.0, 01bit.1, and 01bit.is-0.  All that is implied for Booleans, so far, is that there are Boolean.true and Boolean.false.  One might suppose that there is also a Boolean.is-false, in which case our suspicions about there being no difference here, other than a choice of labels, are difficult to resist.  However, that there is such a prospect is different than it being so.  And we have not said whether there are only these features of Booleans.  In fact, I assert that there is no Boolean.is-false as part of the definition for Booleans.  There is, however, a Boolean.is-true.  (Notice that we still haven't said what kind of object is determined by the 01bit.is-0 and Boolean.is-true features.) (2001-02-20)
     14. Here's a little more.  Let N be a concrete abstraction for the natural numbers (or at least the Peano numbers).  Feature N.0 is the zero or origin of the naturals, N.s is the successor of whatever natural number that an N represents, and N.p is the predecessor of whatever natural number an N represents.  It happens that N.0.p is always N.0 in this formulation.  Finally, N.is-0 is defined analogously to 01bit.is-0.  That is all there is to know about N's.  It is useful to introduce BooleanN so that BooleanN(N.0) is Boolean.false and BooleanN(N.s) is Boolean.true.  Likewise, NBoolean(Boolean.false) is N.0 and NBoolean(Boolean.true) is N.0.s.  (2001-02-20)
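A sketch of N and these two interpretations in ordinary Python (my improvisation; an ordinary hidden integer stands in for whatever the concrete part might actually be):

```python
# Improvised emulation of the concrete abstraction N and the interpretations
# BooleanN and NBoolean, with a hidden integer as the concrete representation.
class Nat:
    """Features: zero (N.0), s (successor), p (predecessor), and is_0."""
    def __init__(self, k=0):
        self._k = k                  # hidden; only the features are "visible"
    @property
    def zero(self):                  # N.0, the origin, from any N in hand
        return Nat(0)
    @property
    def s(self):                     # N.s, the successor
        return Nat(self._k + 1)
    @property
    def p(self):                     # N.p, the predecessor; N.0.p is N.0
        return Nat(max(0, self._k - 1))
    @property
    def is_0(self):                  # defined analogously to 01bit.is-0
        return self._k == 0

def boolean_nat(n):
    """BooleanN: N.0 is false, any successor is true."""
    return not n.is_0

def nat_boolean(b):
    """NBoolean: false is N.0, true is N.0.s."""
    return Nat(0).s if b else Nat(0)
```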

     1. It may seem that Booleans and 01bits are not much different and they could be defined to be the same or equivalent in some way.  While it might be useful to do that, I am intentionally avoiding doing so here.  Were we to do that, there is still no such obvious equivalence between Booleans and N's, even though the interpretations BooleanN and NBoolean are very useful in practice (and are tacitly employed in the C programming language and its relatives).
     2. I am going to make a framework commitment now.  Suppose that features 01bit.is-0, Boolean.is-true, and N.is-0 are all Booleans.  I immediately no longer require Boolean.is-true and one might plausibly claim that Booleans are now more primitive and essential than 01bits.  The algorithmic framework inherently depends on Booleans.  It is what we use to be specific about having the answer to a question and being able to operate conditionally with that answer.
     3. One can argue that ensnarling the framework with a commitment to a particular concrete abstraction for choice is unnecessary and also inconsistent with the idea of carefully managing the choice of abstractions.  It is the case that the statement of algorithms depends almost always on having a way to distinguish particular objects and vary the progression of operations based on that.   But this need not be the best way to deal with it.  That is not important for this discussion.  The point here is how things can fall into place when one makes such a commitment.   It is also the case that there is an underlying commitment that becomes invisible while shaping the statement of algorithms.  (2001-02-20)
     4. In thinking about whether we collapse the truth system of the framework with Boolean or not, I realize there is a similar concern with regard to explicit identity.  I defined these concrete abstractions using is- features, avoiding having an explicit identity (and the messiness of binary operations, which I didn't want to drag in).  On the other hand, natural formulations of axiomatic theories for structures Boolean, 01bit, and N, all appeal to some identity function.  I could look to see how to avoid use of identity there, but I don't see how to avoid it, especially for N.  I am becoming more inclined not to collapse any framework structure with these (supposedly) concrete structures.  I will also have to cure myself of the idea of concrete abstractions, finding better terminology.  (2001-02-21)

What's the point of all of this?  
     1. First, making abstractions concrete (i.e., executable or performable) does not free us from baggage.  
     2. Secondly, making the baggage explicit leads to something very unfamiliar, though potentially very useful.    One useful consequence is that we need not use a concrete abstraction any more powerful than the situation requires.  When all that is needed is some sort of Boolean or 01bit, we can use just that.  When all we need to do is count up and down, N's can be the most direct.  This lets us get closer to what's essential to the situation, at the expense of being much more rigorous about the features that we depend upon.  
     3. It is not clear that combining all of this in one system is an easy way to get to programming or algorithms.  
     4. This approach does point out how much of computation is about choosing a data representation.  This is one of those aspects of software development that is done unconsciously, and putting attention on it via discussion of concrete abstractions (i.e., highly-disciplined data representations that have pretensions to delivery of abstractions) is a useful investigation.
     5. I wouldn't want to give up on separation of design, algorithm, and implementation in favor of machinery like this.  I don't think machinery like this is very understandable until one has gone through practical experiences that foster differentiation and abstraction of design, algorithm, and implementation.
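Point 2 above can be made concrete with a small sketch (mine, purely illustrative).  To know whether a stream contains an even number of items, a full N (count everything, then test) supplies more features than the situation requires; something no stronger than a 01bit, with flip and test as its only features, suffices.

```python
# Illustrative sketch: use no more powerful a structure than needed.

def parity_with_N(items):
    n = 0
    for _ in items:
        n += 1              # full counting: more features than the question needs
    return n % 2 == 0

def parity_with_01bit(items):
    even = True             # the 01bit: flipping and testing are its only features
    for _ in items:
        even = not even     # flip on each item
    return even             # test at the end
```

The two agree on every input, but the second depends only on the features it actually uses, which is the rigor the note asks for.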

4. Separating Algorithm, Implementation, and Design

We then look at what is likely the case in a practical situation.
     1. We confess that we came in with a point of view about conceptual abstractions, and make the best of it.
     2. We look at the motivation for doing this, and why we don't want to collapse algorithms and implementations together.  Part of it has to do with coming to grips with what is essential.  We have a practical example of what is available when we bother to do that.  (I have another example in the Dr. Dobb's paper on integer powers.)
     3. There is also an example in why we often do applied mathematics the way we do, whether balancing a checkbook or building a table or playing pool or pitching in baseball, when we bother to look at these activities that way.

There is an example of design versus algorithm in what we want to represent versus how we are representing it (algorithmically).  Also, the algorithms that combine in the design of something can sit at a quite different level of abstraction than that of the design itself.

References

[ACP 6.2.1B]
Knuth, Donald E.  Binary Search.  Algorithm 6.2.1B on p. 410 of The Art of Computer Programming, v.3: Sorting and Searching.  ed.2.  Addison-Wesley Longman (Reading, MA: 1998).  ISBN 0-201-89685-0.       
     BinSearch is derived from the form given by Knuth in Scientific American 236, 4 (April 1977), as reprinted on pp. 66-67 of Selected Papers on Computer Science, CSLI (Palo Alto: 1996), ISBN 1-881526-91-7 pbk.
     This BinSearch is adjusted to avoid wondering what m/2 might be for integer m<0, while preserving a clean terminating condition and loop invariant.  It is required that (2*i+1)/2 be i for non-negative i.
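The following is my reconstruction of a binary search in that spirit, not Knuth's text.  Every quantity that gets divided stays non-negative, so there is never a question of what m/2 means for m < 0; the sketch relies only on (2*i + 1)//2 being i for non-negative i.

```python
# Reconstruction (not Knuth's text): binary search keeping all divided
# quantities non-negative, with a clean terminating condition.

def bin_search(a, x):
    """Return an index i with a[i] == x in sorted list a, else None."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:              # invariant: if x is in a, it lies in a[lo..hi]
        mid = (lo + hi) // 2     # lo, hi >= 0 here, so no negative division arises
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1         # discard a[lo..mid]
        else:
            hi = mid - 1         # discard a[mid..hi]
    return None                  # clean termination: lo > hi means x is absent
```

Note that hi can become -1 on exit (when x precedes everything), but the loop guard prevents any division from seeing it.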
 
[Knuth1997]
Knuth, Donald E.  The Art of Computer Programming, vol.1: Fundamental Algorithms. ed.3.  Addison Wesley Longman (Reading, MA: 1997).  ISBN 0-201-89683-4.
 
[Pólya1957]
Pólya, George.  How to Solve It.  ed.2.  Princeton University Press (Princeton, NJ: 1945, 1957).  ISBN 0-691-08097-6.
 
[Sedgewick1989]
Sedgewick, Robert.  Algorithms.  Second edition.  Addison-Wesley (Reading, MA: 1983, 1988).  1989 reprint with author's corrections.  ISBN 0-201-06673-4.

History

0.1.0 2023-10-10T16:58Z Initial adjustment for repurposing at orcmid.github.io
0.15 2001-02-23 Split off as r010101a to preserve the raw notes (orcmid).
This version is the original 0.10 renamed and titled.  The intention is to leave it alone and continue expansion of the themes here in separate notes of the Miser Project.  
0.10 2001-02-23 Prepare r010101 for division into separate articles (orcmid).
This note becomes the common basis for several articles that address the underlying themes more carefully.
0.00 2001-01-04 Initiate "Do Programs Teach Algorithms?" (orcmid)
On re-examination of Sedgewick's Algorithms, I noticed that there seems to be a collapse of algorithms with their implementations, and that algorithms are not directly addressed.  This reminded me of earlier work, "Programs Are Rarely Algorithms," that I'd discussed with Bill Anderson in the 1980s.  It seemed important to speak up about it here, especially in light of my inquiry into "concrete abstractions."

 


created 2001-01-04-12:40 -0800  (pst) by orcmid