Copute's unique feature to maximize modularity and avoid LSP errors
Prior discussion of Gilad Bracha's "Constructors are Harmful":
https://goldwetrust.forumotion.com/t112p135-computers#4191
I have refined the solution since I wrote the above. Above I was proposing that static factories in an interface could have their return type parametrized to the implemented subtype class. That does not solve the problem. Static factories in an interface are necessary for other SPOT (single-point-of-truth) and boilerplate elimination reasons, e.g. see my implementation of 'wrap' (a/k/a 'unit') in an IMonad (IApplicative), and notice how much more elegant it is than the equivalent Scalaz. Note SPOT is also a critical requirement for maximizing modularity, i.e. ease of composition and reuse.
Rather, to accomplish abstraction of constructors, we need to nudge programmers to input factory functions, so that any code can be abstracted over another subtype of an 'interface' (i.e. instead of calling a 'class' constructor directly, input a factory function which returns the result of calling a constructor, so that the caller can change the subtype being constructed). So the important point is that we want to force programmers to create an 'interface' (or interfaces) for all their 'class' methods, which is accomplished by not allowing a method implementation (i.e. a 'class' or 'mixin') to be referenced anywhere except in the 'inherits' declaration and the constructor call. This means the type of an instance reference can not contain an identifier which is a 'class' or 'mixin', thus forcing the type of all instance references to contain only identifiers which are an 'interface', i.e. instance references reveal the abstract interface, but do not indicate the implementation.
So Copute will have a crucial difference from Scala and the other contenders (e.g. Ceylon), in that 'interface' and 'mixin' will be separated (not conflated in a 'trait'), and only 'interface' can appear in the type of instance references. Note that in Copute (just like for a 'trait' in Scala) 'mixin' and 'interface' may not have a constructor. Scala's linearised form of multiple inheritance is retained.
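To illustrate the idea, here is a rough sketch in Scala rather than Copute syntax (the Copute grammar is still in flux): callers may only name the 'interface' in types, and they receive a factory function instead of calling a 'class' constructor directly, so the constructed subtype can be swapped without touching the calling code.
- Code:
// The 'interface': the only identifier callers may use in the types of references.
trait IList[A] {
  def prepend(a: A): IList[A]
  def toSeq: Seq[A]
}

// The 'class' implementations; under the proposed rule these names could only
// appear at the construction site, never in the type of an instance reference.
class NilList[A]() extends IList[A] {
  def prepend(a: A): IList[A] = new ConsList(a, this)
  def toSeq: Seq[A] = Seq.empty
}
class ConsList[A](head: A, tail: IList[A]) extends IList[A] {
  def prepend(a: A): IList[A] = new ConsList(a, this)
  def toSeq: Seq[A] = head +: tail.toSeq
}

// Generic code inputs a factory function instead of calling a constructor,
// so it is abstracted over whichever implementation the caller chooses.
def fromSeq[A](xs: Seq[A], empty: () => IList[A]): IList[A] =
  xs.foldRight(empty())((a, acc) => acc.prepend(a))

// Note the reference is typed by the interface, not the class.
val xs: IList[Int] = fromSeq(Seq(1, 2, 3), () => new NilList[Int]())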
Note this unique feature of Copute (along with the elimination of virtual methods, a/k/a 'override') is also necessary to enforce the Liskov Substitution Principle (which relates to the concepts of covariance and contravariance):
(note: some of the following specification is out-of-date relative to the current specification in the grammar and in my various notes)
http://copute.com/dev/docs/Copute/ref/class.html#Virtual_Method
http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
http://copute.com/dev/docs/Copute/ref/class.html#Static_Duck_Typing
http://copute.com/dev/docs/Copute/ref/function.html#Overloading
The following makes some good points, and I applaud the clear explanations of the benefits of the pure lambda calculus.
However, OOP (subtyping) is not arbitrary, because it enables more deterministic partitioning (divide-and-conquer) of the design space, which can aid in the ability to reason about large design spaces (imagine the entire body of software being granularly reusable), as compared to (Haskell's or Scalaz's emulation using implicits) structural subtyping (ad-hoc polymorphism).
I am working on a granular approach to mixing the pure lambda calculus and subtyping.
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/
On the utility of the FP approach
Referential transparency buys you modularity (the extent to which components of a program can be separated and recombined) and compositionality (the extent to which the whole program can be understood by understanding the components and the rules used to combine them).
Type systems also get more sophisticated to the extent that programs are pure. It’s easier to infer types for pure programs than it is for impure ones, for example. You also gain things like type constructor polymorphism (a.k.a. higher-kinded types) and higher-rank types. It’s my experience that in a sufficiently sophisticated program (such as a compiler for a programming language), you either use these abstractions or you repeat yourself.
Sophisticated type systems then in turn aid in program inference, which is the extent to which the computer (or the programmer with the aid of the computer) can infer the correct program from type information (think tab completion, only moreso). Program inference is currently an active area of research.
As a side-note a lot of OO folks are discovering the functional approach as a tool to aid in modular design. They call it “dependency injection”, “inversion of control”, “interpreter pattern” and the like.
A word on OO and the arbitrary
In OO, as commonly practiced, the choice of distinguished argument is arbitrary. Consider this function:
- Code:
K(x, y) = x
If we were to write this in OO style, then on which object, x or y, should the function K be dispatched? Should it be x.K(y), or y.K(x)? It’s arbitrary.
On “types of languages”
I want to get clear on some concepts. First the question of “types of programming languages”. I don’t think it’s helpful to divide programming languages into “functional”, “object-oriented”, “imperative”, etc. Sure, certain languages are better suited to certain styles of programming, but it’s important to differentiate on essentials and not mere aesthetics.
Functional programming is a restriction that the programmer puts on his programs. Having a language with a compiler that will warn you if you’re breaking referential transparency is helpful, but not essential. I do functional programming in Java, for example, which most people consider an “imperative OO” language.
My followup on my above assertion:
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4534
Another of my followups:
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4535
I think it is important to note that OOP and imperative programming are orthogonal concepts (the quote of Jason in the article appears to have conflated them):
http://en.wikipedia.org/wiki/Referential_transparency_(computer_science)#Contrast_to_imperative_programming
Imperative programming is the antithesis of referential transparency, in that the former must be evaluated in a specific order. Whereas the OOP concepts of encapsulation (i.e. information hiding), inheritance, interfaces, and polymorphism can be referentially transparent.
There is an equivalence (not a dichotomy) between FP and OOP in terms of the "Expression Problem" (Wadler), in that it can be solved either by adding new functions or by adding new case classes, respectively.
So imperative programming is the "evil" which FP should be criticizing, not OOP.
Replace "equivalence" with "duality" (the mathematics definition, not the "dichotomy" definition where dichotomy means logical opposite complement or mutually exclusive contradiction). FP and OOP are not dichotomic paradigms (rather they are orthogonal, i.e. mutually independent, and can be mixed), and they are duals in terms of their equivalent ability to solve the "Expression Problem".
http://en.wikipedia.org/wiki/Duality_(mathematics)
Note, above I am referring to the exclusive notion of FP, i.e. referential transparency. There is also an inclusive notion of FP which muddles definitions and understanding (and leads to conflation of terms):
http://www.artima.com/weblogs/viewpost.jsp?thread=166742
Also, replace "or case classes" with the more general "or subtypes".
http://beust.com/weblog2/archives/000490.html (read from Brian Slesinsky's comment down)
http://lambda-the-ultimate.org/node/2927#comment-43268
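To make the Expression Problem duality concrete, here is a minimal Scala sketch (my illustration, not from the linked discussions): the FP-style encoding makes adding a new operation easy but adding a new case hard, while the OOP-style encoding makes adding a new subtype easy but adding a new operation hard.
- Code:
// FP-style encoding: the set of cases is fixed (sealed), so adding a new
// *operation* is easy (just write another function), but adding a new
// *case* requires touching every existing function.
sealed trait Expr
case class Lit(n: Int) extends Expr
case class Add(a: Expr, b: Expr) extends Expr

def eval(e: Expr): Int = e match {
  case Lit(n)    => n
  case Add(a, b) => eval(a) + eval(b)
}
def show(e: Expr): String = e match {   // a new operation: no existing code changes
  case Lit(n)    => n.toString
  case Add(a, b) => s"(${show(a)} + ${show(b)})"
}

// OOP-style encoding: the set of operations is fixed in the interface, so
// adding a new *subtype* is easy, but adding a new *operation* requires
// touching every existing subtype.
trait OExpr { def eval: Int }
class OLit(n: Int) extends OExpr { def eval: Int = n }
class OAdd(a: OExpr, b: OExpr) extends OExpr { def eval: Int = a.eval + b.eval }
class ONeg(a: OExpr) extends OExpr { def eval: Int = -a.eval }  // a new subtype: no existing code changes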
The point is the one I already made. Imperative is always the antithesis of side-effects-free. Pure (referentially transparent) is never imperative and vice-versa. They are dichotomies, i.e. mutually exclusive. If I am wrong, I would appreciate an example (see discussion of your example below).
1) Unless I have misunderstood your description of your point-of-view, it seems you are referring to the inclusive notion, which may be more clear from reading #2 below.
2) Order-of-execution dependence (i.e. imperative) is not side-effects-free, and thus is not pure FP. If your sub-expressions are pure FP, but your composition of them is imperative (sequenced in an execution order), then the composition of the dichotomic paradigms is not pure (even though the sub-expressions are), i.e. they do not mix without choosing one of the contradictions. If the composition of the pure FP expressions is not execution-order dependent, then it is pure and not imperative. Note, I assert that it is possible for a pure function to contain imperative composition which calls only pure functions (the rules I am working on have a few more requirements), thus the contradiction shifts back to pure (this is what I mean by my work on granular mixing); yet while the function is pure, the internals of the function remain order-dependent and impure.
Referential transparency means the function call can be replaced with its cached return value, any where in the program it was called. It thus follows that order-of-execution does not matter, except in a trivial case. In pure FP, the hierarchy of function calls dictates an order-of-execution only where there is no reuse of a function, because the compiler is free to compute and cache values of a sub-function call before executing its caller, and to execute a pure function with different sets of arguments concurrently.
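A small Scala illustration (mine) of this replacement property: a pure call can be substituted by its previously computed value anywhere, so reordering or concurrent evaluation cannot change the result, whereas an impure call breaks the substitution.
- Code:
// Pure: depends only on its argument.
def square(n: Int): Int = n * n

val a = square(4) + square(4)               // evaluates square(4) twice
val cached = square(4)
val b = cached + cached                     // identical result by substitution (a == b)

// Impure: depends on external mutable state, so substitution changes the result.
var counter = 0
def next(): Int = { counter += 1; counter }
val c = next() + next()                     // 1 + 2 = 3
val d = { val v = next(); v + v }           // 3 + 3 = 6, not equal to repeating the call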
Rúnar said:
"Pure and imperative are orthogonal concepts"
Disagree, they are dichotomies, i.e. mutually exclusive (dependent because they are contradictory, can not be mixed without one overriding the other). Orthogonal means mutually inclusive (independent, freely intermixed without either paradigm contradicted).
Rúnar said:
"See the referentially transparent imperative program in the post."
I assume you are referring to where you wrote in the article, "We can also do “OO” in Haskell...".
a) Your example is using a monad (with 'do' syntactical sugar to obscure the nested 'bind' calls) to abstract side effects. Although abstraction is an "OO" concept, I assume your point is that purely functional code can be written in an imperative style. Let's not conflate "OO" with imperative here, as OOP is orthogonal (not mutually exclusive) to pure FP, i.e. they can be intermixed without a contradiction. Whereas, imperative programming cannot be mixed with FP (i.e. not mutually inclusive) without causing some level of the composition to be impure.
b) That example is imperative and not purely functional, because it has side-effects (e.g. printing to stdout). Yes, you can force Haskell to do impure imperative code.
Whereas, for example the State monad can be used to simulate global state in a purely functional (non-imperative) paradigm, i.e. only the pure functions in the composition that access the state are imperatively composed (order-dependent), and pure functions (that do not access the state) remain order-independent, when composed with the former using 'bind'. Here is an excerpt from my work on State monad which is not yet uploaded to my site.
// When no impure morphism functions that operate on the state (i.e. functions that do not return a copy of the modified state) are composed,
// the state monad composition of morphism functions (using 'bind' and 'map')
// isolates the order-dependency drawback of global state to those pure morphism functions that operate on the state,
// and any composed pure morphism functions that do not input the state remain order-independent.
// Thus the state monad is a paradigm for granular composition in concurrent programming.
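Here is a rough Scala rendering (mine, not the Copute code above) of that composition: State[S, A] wraps a pure function S => (A, S); the morphism functions that operate on the state are sequenced through 'bind' ('flatMap'), while a pure function that does not input the state is merely mapped over the result and stays independent of that ordering.
- Code:
// State[S, A] wraps a pure function S => (A, S).
final case class State[S, A](run: S => (A, S)) {
  def map[B](f: A => B): State[S, B] =
    State(s => { val (a, s1) = run(s); (f(a), s1) })
  def flatMap[B](f: A => State[S, B]): State[S, B] =   // 'bind'
    State(s => { val (a, s1) = run(s); f(a).run(s1) })
}

// Morphism functions that operate on the state: their composition is order-dependent.
def increment: State[Int, Int] = State(s => (s + 1, s + 1))
def double: State[Int, Int]    = State(s => (s * 2, s * 2))

// A pure function that does not input the state: order-independent wherever it is mapped.
def label(n: Int): String = s"value=$n"

val program: State[Int, String] =
  increment.flatMap(_ => double).map(label)

val (result, finalState) = program.run(3)   // ("value=8", 8)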
"This is an imperative program with no side-effects."
No, it is a pure program.
That is why I asked you what your definition of "imperative" is, because it seems your eyes are fooled by the do syntactical sugar. The types of the pure functions determine whether they are composed purely or imperatively, not the ordering of the syntactical sugar. That composition flexibility is the beauty of the State monad, as I explained already.
"You talk a lot and say little."
I am thinking you did not yet have the epiphany, so you did not absorb what I wrote. Let me try to clarify below.
"I think you’re conflating “order-of-execution” with ordinary causal sequencing. In the expression f (g x), is it important that we execute g x before applying f to it?"
No, I covered that already in my prior reply. Concurrency in the pure lambda calculus is only possible when we reuse functions more than once. The key is that order is not forced when there is reuse; we can calculate the same function concurrently or in opposite order, e.g.
f( g x )( g y ) // or for those who prefer the notation f( g( x ), g( y ) )
"The sequencing isn’t imposed by any side-effects. It’s clear when you look at the type of (>>=) :: m a -> (a -> m b) -> m b. The second action requires the a from the first action (of type m a)"
You are confusing the purity of the functions being composed with the purity of the composition. Indeed the types of the functions determine whether they are dependent on the prior 'bind' call, but in some uses of a monad, a = b. For those cases, the composition is pure and not imperative.
As I said above, do not be fooled by the do syntactical sugar; the types of the composition control whether the composition is pure or imperative. Pure and imperative are opposites. Please try to understand.
Also I forgot to say that printing to the screen (twice) is order dependent. I think you agree there is order-dependence in the example in your article.
I am in a rush here and I misstated about a = b. The only relevant point is that the types of the morphism function determine if the composition is pure or imperative. I will try to follow up with a clear example later.
Haha. Thanks.
Blame God for Godel's second incompleteness theorem for the inability to declare a consistent theory of the barrier between pure and imperative.
Pureness is relative (to lack of imperative), like everything else existential. Read Hume.
Impure composition of a pure API can render the order-independence within the pure code imperative relative to the external state. If I accepted your notion of imperative, then everything is imperative, and there is no such thing as pure code. Rather I place the barrier at the border between the pure API and the impure composition, which is deterministic relative to static typing.
You place the barrier for some arbitrary reason on pure monadic composition, which orders the monad state purely, which is not conceptually different than how normal function composition orders the final return value (which could also be considered a state and composed impurely).
I think I made a mistake. As long as the State monad is pure, the imperative code is not the monadic binding of functions that access the state, but rather any impure composition of the resultant state. The IO monad is obviously impure as soon as it calls a system function.
Continuation from the last post on prior page of this thread
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4561
It was censored, but I have a copy here.
Definitions:
'pure' = referentially transparent
'impure' = 'imperative' = not referentially transparent = referentially opaque
Rúnar said:
"In my view 'imperative' = 'monadic'"
Thus, you are claiming that 'imperative' = all pure function composition, h : (b -> c) -> (a -> b) -> a -> c; h f g x = f( g x ). Thus you are claiming that 'imperative' is not distinguishable from pure FP.
The proof for the state monad is that the morphism function m : a -> State s b; m t = \x -> g (t,x) may call a function g : (a,s) -> (b,s). The monadic binding is then pure function composition on the tuple, e.g. f( g (t,x) ).
That the ordering of pure function composition can produce a final result of type c or (c,s) is irrelevant to the concept of 'imperative'.
"No it isn’t."
My initial succinct comment was correct, 'imperative' is the antithesis of purity-- there is no other definition that gives imperative any meaning. My followup reply correctly predicted that your definition of imperative is muddled.
My further verbose replies were an attempt to try to understand how you could distinguish some forms of ordering of pure functional composition from others. I was trying to understand and explain what significance you could apply to the abstraction of a monad. The monad abstracts (wraps) some state or computation, so that it is possible to compose functions which map unwrapped types, a -> b, with those that map to wrapped types, a -> m b. I explained that, for example for the state monad, the composed functions of type a -> b do not operate on the state, thus they certainly can't be 'imperative' relative to the state. And as I proved above, the composition of functions of type, a -> State s b, is also pure function composition returning a tuple. You try to make a distinction that monadic ordering determines the result tuple, but the ordering of any pure function composition determines the result. There is no distinction.
The bottom line is that 'imperative' is any function that is not referentially transparent. Period. No corner cases, no muddled definition that has no meaning.
In my next comment, I will explain why the IO monad is impure and thus 'imperative', even in Haskell.
The IO monadic composition in Haskell is claimed[1] to be pure because the World (as abstracted by IO a), and not an instance of type a, is an input and output for each action function, i.e. it is assumed that for a given state of the potentially infinite state-machine in the World, the same World state will always be returned[2]. The problem here is that the World state includes time, and thus it never has the same state. This is what I was referring to when I mentioned Coase's and Godel's theorems. The lambda calculus requires the function to be beta-reducible[3], but the structure of the World is never considered by the Haskell type checker[4][5]. Thus this is a hack to claim[1] that side-effects don't matter to purity, by assuming that the World input type of the IO action function is computable, but never having the compiler check its infinite structure. In short, the World's structure is uncomputable, which relates to undecidability and the Halting theorem.
So the IO monad is actually impure (i.e. 'imperative') and Haskell is lying about it[5]. There is an analogy to virtual methods, which at compile-time comply with the variance of the Liskov Substitution Principle for the method parameters and return types, but at run-time fail the pre- and post-condition behavioral semantics of LSP (which are unchecked by most languages). Scala makes it easier to lie about purity, because its type system does not even check for purity, so any kernel function that interacts with the world is impure.
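To make the world-passing model being criticized above concrete, here is a toy Scala sketch (my illustration of the model described in [2], not Haskell's actual implementation): each action inputs the World and returns a value paired with the "next" World, so the actions only look like pure functions if the World is treated as an ordinary, checkable value.
- Code:
// Toy world-passing encoding of IO. Each action takes the World and returns a
// value paired with the "next" World; composing actions threads the World in
// a fixed forward order.
final case class World(stdout: List[String], stdin: List[String])

type IO[A] = World => (A, World)

def putStrLn(s: String): IO[Unit] =
  w => ((), w.copy(stdout = w.stdout :+ s))

def getLine: IO[String] =
  w => (w.stdin.head, w.copy(stdin = w.stdin.tail))

def bind[A, B](io: IO[A])(f: A => IO[B]): IO[B] =
  w => { val (a, w1) = io(w); f(a)(w1) }

// "Print a prompt, then read a line": the output must happen before the input,
// which is the order-dependence on the external World discussed above.
val program: IO[String] = bind(putStrLn("name?"))(_ => getLine)
val (answer, endWorld) = program(World(Nil, List("Shelby")))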
For a language that allows the declaration of a function's type to include whether it is pure or not (e.g. Copute), as I previously stated, the barrier between 'imperative' (i.e. 'impure') and 'pure' is determined by the type of the function (the API).
[1] - [5] The links for the footnotes follow in the next reply.
[1] Section 7, Tackling the Awkward Squad, Peyton-Jones, "this is an important caveat...I have not proved it"
[2] Section 2.2, Tackling the Awkward Squad, Peyton-Jones
[3] http://en.wikipedia.org/w/index.php?title=Lambda_calculus&oldid=433678424#Computable_functions_and_lambda_calculus
[4] http://www.reddit.com/r/haskell/comments/8bhir/why_the_io_monad_isnt_a_dirty_hack/c08s5bc
[5] http://kawagner.blogspot.com/2007/02/why-monads-are-evil.html?showComment=1170781260000#c2061819858844484763
This is my final post to this blog page. I hope this helps you see the light.
I read "Tackling the Awkward Squad" again, and it clearly says at the end of section 2 and 7, that where the IO is used, the program is imperative and impure. So I am not disagreeing with that research paper by Peyton-Jones, who is one of the principle creators of Haskell. He makes the point that IO should normally be at the top-level functions which call into a core of pure functions. And the point of the IO type is to delineate the impure, imperative portions of the code in a language that is otherwise always pure. So when these types are used, Haskell is not a 100% pure, declarative language. And that is a good thing. This is just Haskell's way of mixing pure and impure code.
The World state is not reversible in time, because it is impossible to cache the infinite states of the World for each discrete time (besides time is continuous which is another reason there are infinite states). Haskell does not allow the World in "IO a" to be reversed in time, i.e. all action functions occur in forward time order and thus the functions that input IO are imperative and no longer declarative. Thus, this restriction on order does make the action functions pure, because a pure function must be reversible, because irreversibly is a side-effect. For example, you write some output to the World, then you read some input which depends on the user having the output before making the input. Thus the output is a global side-effect in the World that impacts the input. Contrast this with functional reactive programming, which allows inputs and outputs to occur in any order, i.e. it is declarative and order-independent. Haskell can do pure functional reactive programming, and to interface a stateful world, every functional reactive program will need an impure, imperative "wrapper", which is what IO is.
See also my comments on this page, Pure Functional Language: Haskell
=========================
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4566
Okay, since you censored my last 2 posts, I will send you this message via your blog, since I cannot seem to find your contact email address anywhere.
Rúnar wrote:
"Imperative programming is just Kleisli composition"
John Hughes apparently says you are wrong. I assume you know he helped create Haskell, and is very prominent in the field of FP:
http://www.cse.chalmers.se/~rjmh/afp-arrows.pdf
Section 1.3 Arrows as Computations, Programming with Arrows
Hughes wrote:
"monads..., help to structure purely functional code, with not an imperative operation in sight"
Note I am publishing all of the censored comments to the web, including those censored by Tony Morris over at his blog. And I am aware of your remark at Twitter:
http://twitter.com/#!/runarorama/status/84026046692851712
"Feeding a known troll: http://t.co/2ezUkL8 "
Enjoy the bliss.
Ah I see he uncensored my posts above.
Rúnar, thank you for uncensoring my 2 posts.
Let me try to close our discussion amicably.
"The noise-to-signal ratio in your comments exceeds my tolerance."
You added that. My verbose middle comments contained some errors (perhaps due to exhaustion: they were written after midnight at the end of an 18-hour work day, in an Asian internet cafe with multiple patrons simultaneously singing different songs loudly for hours, while I was fighting an amoeba infection and taking Flagyl, and I have one eye). The last 4 comments were written after I got some limited sleep (woken by a noisy parade).
I just realized why we were talking past each other.
Imperative means non-declarative, where declarative implies, e.g., no "control flow". The ambiguity is in the definition of "control flow". A pure function creates a state change by creating new state, not by changing the input state. Thus, apparently some claim that pure FP is declarative, and thus that impurity is (a subset of) imperative.
I could accept that definition. Alternatively, I already implied early in our discussion that I could agree that all non-declarative programming is imperative.
Rúnar, are you saying that only Kleisli composition is imperative? Or, are we in agreement that monadic composition generalizes to normal pure function composition, and thus all pure function composition would be imperative, where you said "state transition" (i.e. state creation) is imperative?
Pure function composition is ordered and causes state creation ("transition") whether it be in a monad, Kleisli, or otherwise. Your apparent (?) distinction on monadic and Kleisli led me to believe that you were saying only some FP composition is imperative.
For novice readers, the Kleisli arrow is the function a -> m b. The arrow "->" is the general function. A function is either impure or pure, and thus either changes or creates state, respectively. Some forms of declarative programming (e.g. HTML, CSS) are stateless, and (if I remember correctly) this is possible because they are not Turing complete.
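For concreteness, a small Scala sketch (mine) of Kleisli arrows and their composition, using Option as the monad m:
- Code:
// Kleisli arrows a -> m b, here with m = Option, and their composition.
def parseInt(s: String): Option[Int] =       // String => Option[Int]
  s.toIntOption
def reciprocal(n: Int): Option[Double] =     // Int => Option[Double]
  if (n == 0) None else Some(1.0 / n)

// Kleisli composition: feed the unwrapped result of the first arrow to the second.
def kleisli[A, B, C](f: A => Option[B], g: B => Option[C]): A => Option[C] =
  a => f(a).flatMap(g)

val parseThenInvert: String => Option[Double] = kleisli(parseInt, reciprocal)
// parseThenInvert("4")  == Some(0.25)
// parseThenInvert("0")  == None
// parseThenInvert("hi") == None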
I want to share my impromptu thoughts about why pure FP might be considered declarative. Although there is a structure in function composition which implies a logical order-of-operations, the purity allows the order-of-execution to deviate without any impact on the final result. Structure with order-of-execution independence is one of the general characteristics of a declarative paradigm; whereas, the nature of an imperative program is that the final state depends on the order-of-execution, because the lack of purity means that function state transitions can not be isolated from each other. Seems to be a slam dunk against Rúnar's confusion of terms. Maybe he realized it too:
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4570
===========================
http://kawagner.blogspot.com/2007/02/why-monads-are-evil.html?showComment=1311410012629#c7474729809428588396
Shelby wrote:
I have refined my understanding, which is explained at the following link:
https://goldwetrust.forumotion.com/t112p180-computers#4434
As Peyton-Jones elucidates in "Tackling the Awkward Squad", the IO Monad is simply (one of) Haskell's way(s) of mixing impure, imperative code with pure, declarative code. To the extent that you can make your interface to the world declarative, i.e. pure functional (reactive), you don't use the IO Monad. But at least until the world (i.e. the OS) becomes pure functional, we need a "wrapper" on our pure functional core-- which is the IO Monad.
I gained clarity in the summary of my current open source project, which goes into more detail on this issue:
http://Copute.com
Last edited by Shelby on Fri Dec 09, 2011 10:02 pm; edited 11 times in total
Software engineering is a living organism (why we need Copute!)
http://elegantcode.com/2011/06/22/why-software-development-will-never-be-engineering/
The nature of software development, [...], leads itself to rapid evolution.
Consider what direction software development is evolving. Is it even evolving in the direction of rigid engineering practices or is it evolving in the exact OPPOSITE direction?
Ten years ago, we tried to use UML diagrams and CASE tools to develop software. Ten years ago waterfall was all the rage. Ten years ago, we thought that ten years in the future we would have programs that would allow us to build software in the same way that CAD tools allow for building machine parts.
Not only did it not happen. It went completely the other way. Now we are all talking about Agile. Now people don’t even remember what CASE tools are. Now we are building software without trying to define the whole system in UML diagrams.
The fact of the matter is software systems are unruly beasts!
The analogy was to why online poker skills accelerated in the past 5 years: it was due to more interaction and human involvement, i.e. lowering the barriers to COOPeration (note I also own COOPute.com). This is exactly what I am trying to do for software evolution by creating Copute, with its economic and technical models.
I replied to a comment:
http://elegantcode.com/2011/06/22/why-software-development-will-never-be-engineering/#comment-234084180
The difference between software and the bridge builder is that you can calculate if a bridge is stable and can support the weight and withstand the wind... I never could prove a piece of software to be correct with mathematics. Just by using it and seeing if it didn't cause errors.
There are formal semantics for software in some languages, but your point is valid in the sense that the number of variables that have to be fit in much software is greater, and more importantly they are changing more rapidly. So it is not that we don't have equational foundations increasingly used in practical software engineering, at least by those on the leading edge of software engineering (e.g. category theory, monads, programming paradigms, agile practices, agile-enforcing compilers, formal logic, the pure lambda calculus and referential transparency, behavioral static typing, etc.); it is that the dynamic complexity of variables in software is insatiable, because the better our software, the more variables of demand are created by the new possibilities revealed. Software is unique among the engineering disciplines in that it is applicable to all of them.
Software is the encoding of human knowledge, which is always progressing.
There are some important equational foundations that we need to popularize so that competitive cooperation on software can accelerate.
http://esr.ibiblio.org/?p=3345#comment-311334
There’s something that many commenters have alluded to, but didn’t come out directly and say:
An engineer will specialize and become expert in some field, like bridge-building. The days when an engineer like Brunel could design bridges and ships and… are long gone.
Software developers know how to develop software, but they come to a big project with no knowledge of the problem domain. They have to learn that on-the-fly, gaining experience by making mistakes. Still worse, the people who commissioned the project in the first place don’t know what they really want, and find problems communicating with the software people. This is much harder than telling an engineering firm that you need a bridge from here to there, to carry so much traffic, and they provide a solution.
OTOH, the above is not all bad. Software’s inherent flexibility allows the business to explore solutions that they might otherwise not be aware of (if they have the nerve to keep funding the project). Trying out a suspension bridge, then a cantilever, then a simple truss design by building them all wouldn’t fly in the engineering world; the engineer would be expected to do that on paper and propose the best one.
Exactly why I am creating Copute. The maximum division-of-labor and thus specialization is only possible if code is more reusable and composed of smaller orthogonal modules, so that programmers can specialize and have their efforts reused in different problem domains.
P.S. Daniel's Biblical "travel & knowledge will increase" pops up again, in this online poker (increasing travel virtually) to knowledge progress analogy.
Working on a new Introduction to Copute (also technical rationale)
Copute is a new computer language which aims to correct the mistakes and shortcomings in other programming languages, which have limited the scale at which software can be built from reusable modules.
Fitness
Degrees-of-freedom (DOF) dictate how well a system can adapt to cooperate and fit to a desired solution. For example, there would be gaps (i.e. errors in fitness) between a bicycle chain and the curved shape it wraps around, because the chain can only bend at the links. If instead a solid but flexible metal strip were employed, the metal would remain fit to the curve only with a sustained force-- the resisting force being a reduced DOF and an error in fitness. Permanent bending reduces DOF for wrapping to different curves.
Google can't even find a wise proverb, "if you use your power, then you lose it". The equivalent form is probably popular because it is misunderstood, "absolute power corrupts absolutely". The utility of power exists only because there are resistance forces. This generative fact applies to everything. For example, the power vacuum that gives rise to central authority is due to the vested interests, bickering, selfishness, and complacency (laziness) of the governed.
The proof is given by the well established fundamental physics equations. Power is the rate at which work can be completed. Power requires energy to produce work, but the more work that is performed, the greater the potential energy available to undo the work. This type of potential energy is due to the resistance forces encountered during the work to produce a particular configuration of the subject matter.[1] For example, a compressed spring wants to push back and undo the work.
So if our goal is to get more configurations in our system with less work, then we need to reduce these resistance forces, i.e. increase the DOF. Think of springs opposing movement in numerous directions, and we want to remove them. Thus, if we succeed to increase DOF, it takes less work to produce our desired diversity of configurations, thus less power to produce them faster. And the configuration of the subject matter which results from our work thus decays more slowly (i.e. becomes unfit more slowly), because the resistance forces are smaller. Requiring less power (and work) to produce more of what we want, faster and with a greater longevity, is thus more powerful (efficient) and explains the wisdom of the forgotten proverb.
Knowledge
Communication redundancy (amplitude) is a form of power, because its utility exists due to the resistance to comprehension, i.e. due to noise mixed with the signal. The signal-to-noise ratio (SNR) depends on the DOF of both the sender and the receiver.
The difference between signal and noise, is the mutual comprehension (i.e. resonance) between the sender and the receiver, i.e. noise can become a signal or vice versa, depending on the coupling. In physics, resonance is the lack of resistance to the change in a particular configuration of the subject matter, i.e. each resonator is a DOF.[2] DOF is the number of potential orthogonal (independent) configurations, i.e. the ability to obtain a configuration without impacting the ability to obtain another configuration. In short, DOF are the configurations that don't have dependencies on each other.
Thus increasing the number of independent configurations in any system, makes the system more powerful, requiring less energy, work, and power, to obtain diversity within the system. The second law of thermodynamics says that the universe is trending to maximum entropy, i.e. the maximum independent configurations. Entropy (disorder) is a measure of the relative number of independent possibilities. So now we see why Coase's theorem holds, that the universe is trending towards the maximum free market, and any cost barrier will fail eventually. This is why small things grow faster, because large things reduce the number of independent configurations and thus require exponentially more power to grow. Thus in the point-of-view of change and the future, small things are exponentially more powerful than large things. The bell curve exists because the minority is exponentially more efficient (more DOF), and because a perfectly equal distribution would require infinite DOF and the end of the universe's trend to maximum disorder. Large things stagnate, rot, collapse, and disappear.
Knowledge is correlated to DOF, because in every definition of knowledge one can think of, an increase in knowledge is an increase in DOF and vice versa. Software is unique among the engineering disciplines in that it is applicable to all of them. Software is the process of increasing knowledge. Thus one of the most critical characteristics of software is that it does not want to be static, and that the larger the components (thus the fewer the DOF), the less powerful (efficient) the software process.
Copute attempts to apply these concepts to software, by increasing the independence (orthogonality) of software components, so they can be composed into diverse configurations with less work. Independent programmers can leverage independent components with less communication and refactoring work, thus Copute is about increasing cooperation.
Below we will outline the major features of Copute which enable independent software components.
Pure Functional Programming
Pure functional programming (PFP) is fundamentally about not repeating oneself, i.e. the ability to compose (small) functions more freely, because it has the following features (illustrated in the sketch after this list):
- Pure: for each value of the input, a function call can be replaced by a previously cached value of the result, i.e. the function doesn't depend on any external state.
- Higher-order: a function can be a value.
- Curried: a new function can be created by assigning a constant value to an input of an existing function.
- Declarative: function composition expresses logical order-of-operations, but due to the replacement principle for purity, the order-of-execution is independent.
- Mathematical: lambda calculus is PFP, which allows us to use category theory to maximize DOF.
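A small Scala sketch (mine, since Copute syntax is not final) of the features listed above:
- Code:
// Pure: the result depends only on the arguments, so calls can be cached or reordered.
def area(w: Double, h: Double): Double = w * h

// Higher-order: a function is a value that can be passed and returned.
def twice(f: Int => Int): Int => Int = x => f(f(x))

// Curried: fixing one input of an existing function yields a new function.
def scale(factor: Double)(x: Double): Double = factor * x
val half: Double => Double = scale(0.5)

// Declarative: the composition states the logic; with purity, the
// order-of-execution of the parts cannot change the result.
val areas  = List((2.0, 3.0), (4.0, 5.0)).map { case (w, h) => area(w, h) }
val halved = areas.map(half)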
PFP is never imperative, and is always declarative. In general, declarative languages (e.g. HTML, CSS) express the logic of the system, without forcing a particular order-of-execution. Declarative is one of the features required for concurrency. Due to purity, the state transition for each function in PFP is independent of (orthogonal to) all the others and of the order-of-execution, and thus does not inhibit DOF as imperative programming does. Some people mistakenly state that PFP is imperative, but these are cases where a particular language compiler fudges (e.g. the IO monad in Haskell[3]) or does not check the purity of a construct, giving the appearance of purity within the semantics of the language's type system while being imperative in the semantics of state external to that type system.
PFP is mutually inclusive with OOP, not mutually exclusive. PFP and OOP are duals in that each can be used to solve the Expression Problem for maximum DOF.[4] And category theory reveals that PFP and OOP are composable as follows.
Although the DOF benefits derive from math, the composition benefits of PFP are intuitive because every programmer understands how to do composition of functions. The (category theory models of) higher-level object subtypes (OOP) can be composed with ordinary functions-- functions which input type T and return type A-- without having to write specialized versions of these functions for each possible subtype we create. The following OOP models simply compose as if they were (and thus resonate with) ordinary functions. Read the Copute documentation for each of the following models to aid in understanding the following table.
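As a rough Scala sketch (mine) of that point: one ordinary function T => A is reused across different models (here Option and List) through a single higher-kinded 'map' interface, instead of writing a specialized version per subtype.
- Code:
// A minimal Functor-like interface over a higher-kinded type M[_].
trait Mappable[M[_]] {
  def map[T, A](m: M[T])(f: T => A): M[A]
}

object Mappable {
  val optionMappable: Mappable[Option] = new Mappable[Option] {
    def map[T, A](m: Option[T])(f: T => A): Option[A] = m.map(f)
  }
  val listMappable: Mappable[List] = new Mappable[List] {
    def map[T, A](m: List[T])(f: T => A): List[A] = m.map(f)
  }
}

def describe(n: Int): String = s"got $n"     // an ordinary function T => A

// The same ordinary function composes with every model that provides 'map'.
val a: Option[String] = Mappable.optionMappable.map(Some(42): Option[Int])(describe)
val b: List[String]   = Mappable.listMappable.map(List(1, 2, 3))(describe)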
Object Oriented Programming
Copute builds on several critical Scala innovations, such as higher-kinded generics, generic type parameter variance annotations, multiple inheritance of interface and implementation, and pattern matching with abstract data types (case classes). Copute adds a few critical refinements to the single-point-of-truth (SPOT), which drastically improve the DOF, conciseness, and intuitive understanding.
- Purity
- Unreferencable implementation
- No virtual re-implementation
- Static methods instead of implicits
- Disobscurity: e.g. one way to write a function instead of 6, no methods as operators, no.
- No exceptions
TODO: discuss single-point-of-truth. Also discuss application of granular purity to OOP.
[1] http://en.wikipedia.org/w/index.php?title=Energy&oldid=435292864
Stored energy is created whenever a particle has been moved through a field it interacts with (requiring a force to do so), but the energy to accomplish this is stored as a new position of the particles in the field—a configuration that must be "held" or fixed by a different type of force (otherwise, the new configuration would resolve itself by the field pushing or pulling the particle back toward its previous position). This type of energy "stored" by force-fields and particles that have been forced into a new physical configuration in the field by doing work on them by another system, is referred to as potential energy. A simple example of potential energy is the work needed to lift an object in a gravity field, up to a support.
[2] http://en.wikipedia.org/w/index.php?title=Resonance&oldid=432632299#Resonators
A physical system can have as many resonant frequencies as it has degrees of freedom.
http://en.wikipedia.org/w/index.php?title=Resonance&oldid=432632299#Mechanical_and_acoustic_resonance
Mechanical resonance is the tendency of a mechanical system to absorb more energy [i.e. less resistance] when the frequency of its oscillations [i.e. change in configuration] matches the system's natural frequency of vibration [i.e. natural change in configuration] than it does at other frequencies.
[3] https://goldwetrust.forumotion.com/t112p180-computers#4434
[4] http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4535
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4537
Last edited by Shelby on Mon Jul 25, 2011 10:07 am; edited 4 times in total
Efficiency of purely functional programming
http://twitter.com/#!/kmett/status/74517454973444096
Going immutable in a strict language can cost you a logarithmic factor, but you get free persistence http://bit.ly/6T6lfr
http://stackoverflow.com/questions/1990464/efficiency-of-purely-functional-programming
Array access explains the logarithmic cost factor:
http://stackoverflow.com/questions/1255018/n-queens-in-haskell-without-list-traversal/1268285#1268285
I'm becoming skeptical about the claim that pure functional is generally log n slower:
https://goldwetrust.forumotion.com/t112p195-computers#4537
Discussion of tradeoffs for lazy programming:
https://goldwetrust.forumotion.com/t112p165-computers#4430
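A small Scala illustration (mine) of the tradeoff being discussed: a mutable array gives O(1) destructive updates, while an immutable Vector's 'updated' copies only the path to the changed slot in its underlying tree (roughly logarithmic) and leaves the old version intact ("free persistence").
- Code:
// Mutable array: O(1) in-place update, but the old version is destroyed.
val arr = Array(1, 2, 3, 4)
arr(2) = 99                                  // constant-time, destructive

// Immutable Vector: 'updated' copies only the path to the changed slot in the
// underlying tree, roughly O(log n), and the old version remains usable.
val v0 = Vector(1, 2, 3, 4)
val v1 = v0.updated(2, 99)                   // new version
// v0 == Vector(1, 2, 3, 4)   (still intact)
// v1 == Vector(1, 2, 99, 4)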
Last edited by Shelby on Thu Aug 25, 2011 5:58 pm; edited 3 times in total
BitCoin is socialist and not (and no online currency can be) anonymous
http://www.bitcoin.org/
The fundamental problem is that any online currency either requires a centralized trust authority which is subject to govt regulation, or a majority vote of decentralized authorities which is just socialism and the antithesis of what makes gold and silver money (he who holds the gold makes the rules, no third party authority or majority vote). I had come to this conclusion a long time ago in the Precious Metals forum during my analysis of whether an online currency (potentially backed by gold and silver) could survive govt retribution.
BitCoin can be defeated by a hacker who puts a virus on a majority of computers using BitCoin, or a govt legislation that requires every computer sold to run a firmware program that defeats it (even if a minority will remove the firmware illicitly):
http://forum.bitcoin.org/index.php?topic=12435.msg172811#msg172811
Thus the BitCoin algorithm has an attack vector, which encourages govt regulation of our computers.
No anonymity on the internet:
http://en.wikipedia.org/wiki/Bitcoin#Anonymity
============================
I should also be frank with my friends.
...at FinancialSense.com. The topic of Bitcoin helped me find direction for it.
Spoken like a good collectivist:
Government debt should be settled, and outlawed going forward. A balanced budget amendment to the Constitution should be adopted.
I suppose you didn't realize that you were also outlawing the 2nd law of thermodynamics too. But I am not going to explain that one to you, because you will NEVER get it.
Btw, Bitcoin is vulnerable to attack by a government law which says every computer must contain a chip which the government can use to hijack the computer for purposes of regulating peer-to-peer protocols. This is because for money to gain enough supply to be viable, the protocol has to be fairly well known and static.
Your neighborly Google, whose slogan is "Do no evil"
Your neighborly Google, whose slogan is "Do no evil":
http://www.stuff.co.nz/technology/digital-living/5217945/Google-Wi-Fi-snooping-lawsuits-can-proceed
Computers create heat and no potential energy
Note this applies to the hardware perspective, while in the software dimension, there is potential energy created (which is why software decays over time relative to the changing applications).
http://www.fibertown.com/pdf/uptimeinstituteheatdensitytrends.pdf
Power consumed by computer and communication products is totally converted to heat.
So this means that the work being done is not creating potential energy (sustained resistance forces), but rather permanently bending the structures of resistance, thus releasing heat (heat is disordered matter, that is not useful, i.e. not "signal"). See the quotes of what I had previously about DOF. Remember that "work" is noise, and the information transmitted (resonated between sender & receiver) is the "signal". We want to eliminate "work" and increase information.
In a computer, these bending resistance forces are stored due to capacitance, i.e. the ability to store a charge of electrons with an insulator, and the fact that a transistor is a variable resistor (insulator) controlled by a charge applied to it. These resistance forces are reduced by making the transistors smaller, which is what causes Moore's Law (we can double the # of transistors on a silicon die every 18 months).
My previous treatise on physics and information:
https://goldwetrust.forumotion.com/t112p180-computers#4436
Shelby wrote:[...]
Employing instead a solid, but flexible metal strip, the metal would remain fit to the curve only with a sustained force-- the resisting force being a reduced DOF and an error in fitness. Permanent bending reduces DOF for wrapping to different curve
[...]
The proof is given by the well established fundamental physics equations. Power is the rate at which work can be completed. Power requires energy to produce work, but the more work that is performed, the greater the potential energy available to undo the work. This type of potential energy is due to the resistance forces encountered during the work to produce a particular configuration of the subject matter.[1] For example, a compressed spring wants to push back and undo the work.
So if our goal is to get more configurations in our system with less work, then we need to reduce these resistance forces, i.e. increase the DOF. Think of springs opposing movement in numerous directions, and we want to remove them. Thus, if we succeed in increasing DOF, it takes less work to produce our desired diversity of configurations, thus less power to produce them faster. And the configuration of the subject matter which results from our work decays more slowly (i.e. becomes unfit more slowly), because the resistance forces are smaller. Requiring less power (and work) to produce more of what we want, faster, and with greater longevity, is thus more powerful (efficient) [...]
Versioning, forking, bug fixes, etc... in Copute.com
THIS IS NO LONGER THE MODEL.
Please note that this is a thought experiment and it may not be implemented for Copute. There is ongoing thinking on how to obtain the best free market pricing model. Preferably it should be entirely decentralized.
Some private writings solved most of the issues in the Copute.com economic model. The remaining issue is modifications to existing modules.
Summary of Economic Model
First, a brief summary of the prior two posts. Copute will be a new open source (free) computer language that fosters reusability of modules of programming (to finer granularity of module size than with other languages). Anyone can use Copute for free, even modify it, and there is no requirement to use the Copute.com marketplace when using the Copute language. In addition, Copute.com will offer a non-exclusive (the same modules can be offered elsewhere, even offered for free) marketplace for programmers to offer their modules to other developers, so that duplication of programming effort (modules) can be reduced. Any derivative software product created from these Copute.com modules will pay the greater of 1% of gross sales or 10% of the salaries[1] paid to develop that derivative software product. Copute.com will distribute these royalties to module owners (after deducting Copute.com's fee, perhaps 1% of the royalties) proportionally, based on automated, statistically sampled profiling of the derivative software in use to determine the percent of time the CPU spends in each module.
Copute.com won't have a monopoly, because any other site can offer a similar marketplace for Copute modules (or even modules of any computer language, although the Copute language will be best for reusing modules). This marketplace is to make modules fungible, and the importance of this is discussed here. Note that even though modules might be offered for free elsewhere, if the derivative software uses even one module from Copute.com that is not free elsewhere, then it will pay the same royalty to Copute.com. Thus the freedom of non-exclusivity does not render Copute.com's business model impotent. In fact, it adds inertia, and of course some module owners will not want to offer their modules for free elsewhere (they might offer them elsewhere not for free, which again would not hurt Copute.com's business model, but rather add inertia to it).
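To make the royalty rule above concrete, here is a minimal Scala sketch (the sales, salary, fee, and profiling figures are purely illustrative, and the names are my own, not part of any Copute.com specification):
Code:
object RoyaltyDemo extends App {
  val grossSales  = 500000.0   // derivative product's gross sales (illustrative)
  val devSalaries = 300000.0   // salaries paid to develop the derivative product (illustrative)
  val royalty     = math.max(0.01 * grossSales, 0.10 * devSalaries)  // greater of 1% of sales or 10% of salaries

  val copuFeeRate   = 0.01     // "perhaps 1% of the royalties"
  val distributable = royalty * (1 - copuFeeRate)

  // statistically sampled CPU-time profile of the derivative product, per module (sums to 1.0)
  val cpuShare = Map("moduleA" -> 0.70, "moduleB" -> 0.25, "moduleC" -> 0.05)
  val payouts  = cpuShare.map { case (m, share) => m -> distributable * share }

  payouts.foreach { case (m, amount) => println(s"$m: " + "%.2f".format(amount)) }
}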
Module Improvement Ownership
But who owns the module when someone other than the original owner improves it (bug fix, refinement, etc)? I can't think of a metric of value for such improvements to modules. The best idea I have so far is to have a very flexible versioning scheme, give minimally 1% of "remaining value" for each fix, and give the option for existing owners of a module to award an increase to that minimum. Let me explain.
A module is created and offered by the owner(s) on Copute.com. This is open source, meaning anyone could offer an improvement. So how do we assign the ownership percentage of the improved version of the module? On the first improvement, the original owner has a 100% share of ownership, so we minimally and automatically award 1% of that to the submitter of the improved version. The original owner has the option of logging in to his account on Copute.com and increasing that minimum award (only the existing owners will have the knowledge to best evaluate an improvement and decide its value). So then there are two owners, and minimally the split is 99% and 1%. Then a second improvement is made, so minimally 1% of 99% is automatically awarded, which is 0.99%. And this 0.99% is deducted proportionally from the existing owners, so the first owner has 98.02%, the second has 0.99%, and the third has 0.99%. Either of the two prior owners can optionally award more, giving up some of their percentage. Eventually, for example, it may reach the point where the original owner has say 50%, and each improvement owner has (minimally) 0.5%. The point is that future improvements slow down the erosion of prior ownership, unless the prior owners choose to award higher value. This is desirable because of the law of diminishing returns (80/20 rule).
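A minimal Scala sketch of the dilution arithmetic, under my reading of the scheme (each improvement is minimally awarded 1% of the original owner's remaining share, deducted proportionally from all current owners); it reproduces the 98.02% / 0.99% / 0.99% figures above:
Code:
object DilutionDemo extends App {
  def addImprovement(shares: Vector[Double]): Vector[Double] = {
    val award = 0.01 * shares.head                    // minimally 1% of the original owner's current share
    val total = shares.sum
    shares.map(s => s - award * s / total) :+ award   // deduct proportionally, append the new owner
  }

  // start with the original owner at 100%, apply two improvements
  val afterTwo = Iterator.iterate(Vector(100.0))(addImprovement).drop(2).next()
  println(afterTwo.map(s => "%.2f".format(s)).mkString(", "))  // 98.02, 0.99, 0.99
}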
Note that projects (e.g. Linux, Firefox, etc.) tend to have thousands of bug fixes and improvements, thus the original creators would be diluted to near 0% at project-scale granularity if this model were applied, but within a project the modules typically require only dozens of fixes/improvements, so this model encourages original owners to create smaller (more granular, thus more) reusable modules. So we see that the benefit of smaller, reusable modules scales economically well, whereas project-level granularity does not (it cannot be fungible).
Also note that the above concept is economic ownership, i.e. the percentage of royalties, not control. If a module is released on the Copute.com marketplace, control is relinquished to the terms of the license, which allows unlimited modifications and forks but requires them to reside on the Copute.com marketplace; not even Copute.com has any centralized decision control. The original owner of the module might release the original module elsewhere with different licensing terms (the Copute.com marketplace is non-exclusive), but that owner cannot release the improvements elsewhere, and neither can the submitter of an improvement release the improvement elsewhere (unless making the improvement to an identical fork of the same module in a different marketplace). So we see that Copute.com's forks potentially become unique from the forks in other marketplaces, unless all submitters of improvements submit to all marketplaces.
Versioning and Forking
The Copute language, and thus the complementary (and orthogonal) marketplace on Copute.com, is all about freedom-of-choice. Thus we don't want to limit it in any way. So submitters of improvements should be free to omit selected prior improvements, i.e. to create a new fork in the version trail of the module. I have an idea for a version numbering scheme that accommodates this with explicit enumeration of omissions. A version number will be numbers separated by periods, e.g. "1.0.1", where each period identifies a fork. So the first version of a module is simply "1". If an improvement is made to that module, then the version becomes "2", the next "3", etc. If an improvement omits improvement "2" and keeps "3", then the version number is "1:3.1". If an improvement omits improvement "3" forward, then the version number is "2.1". If an improvement omits improvement "2" and keeps "3" through "5", then the version number is "1:3-5.1". If an improvement omits improvement "3" and keeps "4" through "5", then the version number is "2:4-5.1", which is equivalent shorthand for "1-2:4-5.1". The ".1" becomes a new fork, and it can then also be forked, so version numbers can end up with multiple periods.
Note that when a submitter proposes to create an improvement without forking, it will initially be created as a fork; it must then be accepted by one of the owners of the trunk in order to be numbered as a trunk improvement instead of a fork. This is necessary so that owners don't have to create forks to indicate they don't approve of an improvement.
Note there is no automated way to precisely verify that a particular improvement actually includes and omits the improvements it says it does. Thus it is allowed to have two forks of the same fork, and this is done using "..n.1", e.g. "1:3..1.1" and "1:3..2.1" would be alternative forks of "1:3.1".
Note this proposed module version numbering scheme bears almost no correlation of semantics to existing project version numbering schemes that also employ periods.
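A hedged Scala sketch of the notation described above (my own encoding of the scheme, not an official format): each fork segment lists the ranges of ancestor improvements it keeps, and the final number counts improvements on the newest fork.
Code:
final case class Segment(keptRanges: List[(Int, Int)]) {
  def render: String = keptRanges.zipWithIndex.map {
    case ((1, b), 0)           => b.toString   // leading "1-b" abbreviates to "b", per the shorthand rule
    case ((a, b), _) if a == b => a.toString
    case ((a, b), _)           => s"$a-$b"
  }.mkString(":")
}

final case class Version(forks: List[Segment], tip: Int) {
  def render: String = (forks.map(_.render) :+ tip.toString).mkString(".")
}

object VersionDemo extends App {
  println(Version(Nil, 3).render)                                  // "3"       (third trunk improvement)
  println(Version(List(Segment(List((1, 1), (3, 5)))), 1).render)  // "1:3-5.1" (omits trunk improvement 2)
  println(Version(List(Segment(List((1, 2), (4, 5)))), 1).render)  // "2:4-5.1" (shorthand for "1-2:4-5.1")
}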
So which version will people use if there are multiple forks (which tip of which branch of a metaphorical tree)? They will use whichever fork they think best fits their needs. The popularity of forks in derivative software will be a vote on their fitness, i.e. the free market will anneal the answer. The module owner(s) may endorse a particular fork by making improvements in it. This is superior to the current open source model, where forking is highly discouraged and the project owners (current languages don't support fine module granularity of reuse) are (hopefully "benevolent") dictators "for life" (Linus Torvalds and others have been referred to with the quoted title).
[1] Salaries are those of the company developing the derivative software product, not any salaries that were paid to develop the modules that are offered on Copute.com.
Social networking is driven by group formation
Everybody wants to belong to clique that they identify with:
http://blogs.forbes.com/chunkamui/2011/07/15/why-google-is-poised-to-fail/
http://blogs.forbes.com/chunkamui/2011/01/12/why-facebook-beat-myspace-and-why-myspaces-revised-strategy-will-probably-fail/
Scala's abstract type doesn't solve the Expression Problem, but interfaces do
A research paper by the creator of Scala explains how Scala's abstract type can enable extension simultaneously in the subtype and method dimensions.
However, my understanding is that it doesn't really provide a solution, and interfaces do. The research paper implies that if Exp1 extends Exp and adds a show method, then a class Plus1( l:Exp1, r:Exp1 ) extends Plus( l, r ) with Exp1 would not work because either the compiler would complain, or two instances of l and r would need to be saved in the class, an Exp copy in Plus and an Exp1 copy in Plus1. As far as I can see, the proposed solution using an abstract type exp isn't really a solution to the Expression Problem, because the abstract type needs to be closed before it can be referenced by legacy code that would be extended by Plus1. Thus I don't see how legacy code can be extended with new methods without recompilation with this paradigm, to satisfy the Expression Problem.
Instead I would suggest that Plus1 does not need to extend Plus (any shared implementation could be inherited via a mixin), because consumers of Plus and Plus1 should be inputting an interface Exp and Exp1 respectively and never a class Plus or Plus1. Thus a Plus1 can interoperate with legacy code that expects an Exp.
This is another example of why Copute will not allow implementation (e.g. class or mixin) to be referenced: only interfaces should be referenceable. This is yet more evidence that this is a major breakthrough design decision for Copute.
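A minimal Scala sketch of the alternative suggested above (Num1, PlusImpl, and twice are my own filler names): Plus1 never extends the class Plus, shared implementation lives in a mixin, and all references are to the interfaces Exp and Exp1, so legacy consumers of Exp keep working.
Code:
trait Exp  { def eval: Int }
trait Exp1 extends Exp { def show: String }

// mixin carrying the shared implementation; it references only the interface Exp
trait PlusImpl extends Exp {
  def l: Exp; def r: Exp
  def eval: Int = l.eval + r.eval
}

class Plus (val l: Exp,  val r: Exp)  extends PlusImpl
class Plus1(val l: Exp1, val r: Exp1) extends PlusImpl with Exp1 {
  def show: String = "(" + l.show + " + " + r.show + ")"
}

object ExpDemo extends App {
  class Num1(n: Int) extends Exp1 { def eval = n; def show = n.toString }
  def twice(e: Exp): Int = e.eval * 2            // legacy code that knows only Exp
  val e = new Plus1(new Num1(1), new Num1(2))
  println(twice(e))                              // 6, interoperates with legacy Exp code
  println(e.show)                                // (1 + 2), the new method dimension
}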
==========
My analysis thus far is that Scala's explicitly typed self references which are employed in the "Functional Decomposition" section of the above research paper, are also a complication which become unnecessary if interface and implementation inheritance are unconflated, i.e. for the same reason explained above. Interfaces can be employed to provide orthogonal, flattened abstraction of extensible implementation, when all references (e.g. function inputs and outputs) are interfaces, without binding to the inheritance of implementation. The general conclusion of my current understanding is that the complexity of Scala's abstract typing is due to supporting extension with conflation of interface and implementation inheritance. Remove the conflation as Copute proposes, then these complexities in the type system can be discarded. This should make Copute code much more intuitive, less verbose, and eliminate these examples of complex typing hierarchies to support open extension.
Future examples will elucidate these claims.
OOP in Haskell
The description of Haskell's typing system is complex, but the conclusions with respect to the Expression Problem are not. Type construction (i.e. the constructor type name and any constructor argument values as type members) is orthogonal to type methods (i.e. interface) in Haskell. Thus any preexisting constructable type can through extension be declared to implicitly inherit from any interface, but from the perspective of the interface these are just new subtypes. So in Haskell to be extensible and solve the Expression Problem, functions should input interfaces instead of constructable types, but this is not enforced, so interface and implementation (i.e. constructable types) can be conflated. So in terms of the Expression Problem, Haskell is nearly equivalent in power and flexibility to a subtyped OOP program which only references interfaces and never implementation (i.e. class or mixin), except that the use of pattern matching on function inputs in Haskell is not extensible, because Haskell pattern matches only on constructable types and not "interfaces" (i.e. interface is a typeclass). See also What is Pattern Matching? Also Haskell can't pattern match (guard) on the instances of an "interface".
Note however that a subtyped OOP program which references implementation is not as extensible as Haskell, because then preexisting data types cannot be given new interfaces. If implementation is never referenced, then data types that may contain implementation (e.g. class or mixin) can be given new interfaces in terms of the Expression Problem, via multiple inheritance of the data type and the new interface in a new data type, because the supertype data type was never referenced. Thus Scala's implicit conversion functions[6] defeat subtyping (as a solution to the Expression Problem) because they reference implementation by always returning the same constructable data type (which is the implicit substitution for the input type), for no other gain in flexibility and expressive power (when Copute's static interface methods are available to emulate Haskell's typeclass interfaces), as compared to an OOP program that does not reference implementation.
- Data type class ('data') has constructors[1], and methods are declared orthogonally as an 'instance' (subtype) of a Typeclass ('class') interface.[2][9] Haskell uses the keyword 'data' for a data type class (implementation) and 'class' for an interface.
- The explicit parametric polymorphism of a data type class is kind 0 (i.e. none) or 1.[1] Note 'tyvar' of 'simpletype' has arity >= 0, so 0 is kind 0, and > 0 is kind 1. Haskell's type context class/data A a => B a is roughly trait/class B[a <: A] in Scala. An implicit higher-kind is declared with two or more 'tyvar' type parameters where one is the type parameter of the other in a constructor argument[3] (also the kind of an instance of an interface is inferred, see Functor example[2]). It is not possible to declare that a future subtype of 'tycon' must be a kind > 0, because there is no subtyping of data type class (the constructors are the possible subtypes and it is final). Any data type class can be declared to be the 'instance' (subtype) of a 'typeclass' interface at any time, thus a data type class has infinite potential supertypes[9]. Note that the 'context' is a supertype bound applied to a type parameter, but it is not exclusive, meaning the data type class of the type parameter implements (is an 'instance' of) that interface, but may also implement infinite others, i.e. any data type can be coerced to any interface via an 'instance' declaration (i.e. an implicit conversion). Any function that inputs a data type class (1st letter uppercase in Haskell), can not be extended per the Expression Problem. Any function that inputs a type parameter without an interface context, is implicitly structurally polymorphic and solves the Expression Problem with implicit structural subtyping. Any function that inputs a type parameter constrained by an interface context, is explicitly (requires the type to a declared 'instance' of the interface) structurally polymorphic and solves the Expression Problem with explicit structural subtyping.
Unlike subtyped OOP, Haskell allows a new interface to be implemented by a preexisting data typeclass (without recompiling it)[9], because the declaration of methods (with 'instance') is orthogonal to the declaration of a data type class. However, functions which wish to be extensible in the Expression Problem will not input data type classes, but rather interfaces. And the supertype(s) (i.e. context) of a Haskell Typeclass interface are final. Thus adding an interface to a preexisting data type class in Haskell, is analogous to creating a new data type in OOP and inheriting the preexisting implementation (class or mixin), and the new interface which inherits from the old interface, because no code should be inputting the preexisting data type any way (remember "constructors are evil", always input a factory method). Btw, this is why I think Scala's implicit conversion functions are unnecessary, and also dangerous because they defeat subtyping.
- Constructor arguments are immutable data members of the class[1] which can be accessed by position with pattern matching. The lack of argument names gives no hint as to the purpose of the arguments (e.g. data Point a = Point a a, means the 2 coordinates have same type, but doesn't hint that they are x,y cartesian coordinates), see next item for a solution.[4]
- Constructor arguments may optionally be accessed with named getter methods.[4]
- Private constructors employing 'module' and 'data'.[5]
- Mixins employing 'module' and 'newtype' where each method must be explicitly inherited.[5]
- A Haskell interface (typeclass) may contain a method with a default implementation (and it may call the other abstract methods of the interface), thus Haskell conflates implementation and interface.
- An interesting insight from #2 and #3 above, that Haskell has a weakness that does not exist in subtyped OOP, is that since functions that input data type classes are not extensible in the Expression Problem, a function is not extensible if it has input(s) that are pattern matched on constructor argument(s), unless we provide a Typeclass interface to them (perhaps View Patterns would also work) and input the Typeclass interface instead, e.g.
- Code:
class MinMaxValue a where  -- assumed helper class (not defined in the original): the min and max channel values
  minval :: a
  maxval :: a

class IColor c where
  red   :: MinMaxValue a => c a -> a  -- note Haskell requires the type parameters in the interface to have same
  green :: MinMaxValue a => c a -> a  -- context as in the instance, which means Haskell is not SPOT for bounds
  blue  :: MinMaxValue a => c a -> a  -- on type parameters, see section 7.3 of Generics of a Higher Kind

data Color a = Red | Green | Blue | RGB { r, g, b :: a }

instance IColor Color where  -- the context belongs on the class methods above, not on this instance head
  red   Red         = maxval
  red   Green       = minval
  red   Blue        = minval
  red   (RGB x _ _) = x
  green Red         = minval
  green Green       = maxval
  green Blue        = minval
  green (RGB _ x _) = x
  blue  Red         = minval
  blue  Green       = minval
  blue  Blue        = maxval
  blue  (RGB _ _ x) = x
- Haskell does not allow declaring the same data type as inheriting multiple different instances of the same interface (typeclass), i.e. Haskell allows many supertype interfaces but each must be a unique interface.[7][8] (Note: a typeclass can be multiply inherited if it contains no methods, e.g. class Exp.) Thus an Int can't implement an Ord(ering) in multiple ways, i.e. Int can't be an instance of Ord and OrdDivisible, if OrdDivisible inherits from Ord, thus functions that would normally compose on Ord have to compose separately on Int and IntDivisible. What a mess. In short, interfaces can multiply inherit, but the diamond problem isn't allowed.
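For contrast, a minimal Scala sketch (my own illustration, not taken from [7][8]) of what "implementing an Ord in multiple ways" looks like when instances are ordinary values passed explicitly rather than a single canonical typeclass instance per type:
Code:
trait Ord[A] { def compare(x: A, y: A): Int }

object MultiOrdDemo extends App {
  // the same type implements Ord in two different ways
  val ascending:  Ord[Int] = (x, y) => Integer.compare(x, y)
  val descending: Ord[Int] = (x, y) => Integer.compare(y, x)

  // functions written against Ord compose with either instance
  def max[A](x: A, y: A)(ord: Ord[A]): A = if (ord.compare(x, y) >= 0) x else y

  println(max(2, 3)(ascending))   // 3
  println(max(2, 3)(descending))  // 2
}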
[1] The Haskell 98 Report, section 4.2.1 Algebraic Datatype Declarations
[2] A Gentle Introduction to Haskell, Version 98, section 5 Type Classes and Overloading
[3] The Haskell 98 Report, section 4.6 Kind Inference
[4] A Gentle Introduction to Haskell, Version 98, section 6.2 Field Labels
[5] The Haskell 98 Report, section 5.8 Abstract Datatypes
[6] Programming in Scala, Odersky, Spoon, Venners, chapter 21 Implicit Conversions and Parameters.
[7] Modules Matter Most, Existential Type blog, Robert Harper.
[8] Modular Type Classes, Dreyer, Harper, Chakravarty
[9] Software Extension and Integration with Type Classes, by Ralf Lämmel and Klaus Ostermann
re: Types are Anti-Modular
http://gbracha.blogspot.com/2011/06/types-are-anti-modular.html?showComment=1311534787150#c2109159097718758997
Shelby wrote:For independent compilation (and design) of typed interfaces, modules can declare their interfaces independently, then a glue module coerces their interfaces to interoperate.
Imagine a client that integrates a test server. The interface of the test server is the external interface of the client.
Or in other words, the boundary between intra-module modularity and inter-module modularity is arbitrary and thus doesn't exist.
All type systems leak implementation to the extent that the semantic invariants are not checked by the compiler. Checking more invariants (stronger typing) is an attempt to leak less implementation, to catch more errors at compile-time.
P.S. the failure of an arbitrary barrier to the free market is expected by Coase's Theorem. Coase's Theorem is a special case of the 2nd law of thermodynamics, and the 2nd law states that the universe trends toward maximum independent (more precisely, orthogonal) actors (i.e. disorder/entropy).
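A minimal Scala sketch of the "glue module" idea in the quoted reply (ModuleA, ModuleB, Logger, and Sink are hypothetical names): two modules declare their interfaces independently, and a third module coerces one interface to the other so they interoperate.
Code:
object ModuleA { trait Logger { def log(msg: String): Unit } }    // declared independently
object ModuleB { trait Sink   { def write(line: String): Unit } } // declared independently

object Glue {
  // the glue module owns the coercion; neither ModuleA nor ModuleB depends on the other
  def loggerFromSink(sink: ModuleB.Sink): ModuleA.Logger =
    (msg: String) => sink.write(msg)
}

object GlueDemo extends App {
  val sink:   ModuleB.Sink   = (line: String) => println(line)
  val logger: ModuleA.Logger = Glue.loggerFromSink(sink)
  logger.log("interoperating through the glue module")
}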
Bool is blindness
Don't throw away the context by creating a Bool, pass the context around instead:
http://existentialtype.wordpress.com/2011/03/15/boolean-blindness/
And equality is not decidable for all types:
http://existentialtype.wordpress.com/2011/03/15/dont-mention-equality/
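A tiny Scala illustration of the "Boolean blindness" point above (my own example): returning the matching element keeps the context that a bare Bool throws away.
Code:
object BoolBlindnessDemo extends App {
  val xs = List(1, 2, 3, 4)

  val blind:   Boolean     = xs.exists(_ > 2)  // true, but which element? the evidence is gone
  val sighted: Option[Int] = xs.find(_ > 2)    // Some(3): the witness travels with the answer

  sighted match {
    case Some(n) => println(s"found $n and can keep using it")
    case None    => println("nothing matched")
  }
}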
Monad/Comonad duality, also lazy/eager duality
http://blog.sigfpe.com/2006/06/monads-kleisli-arrows-comonads-and.html?showComment=1311779802514#c3359419049328160234
Shelby Moore III wrote:
Monad is the model for any parametric type that we know the generative structure of, so we can compose functions on lifted outputs, because the type knows how to lift (i.e. construct, generate, 'unit' or 'return') instances of its type parameter to its structure.
Comonad is the model for any parametric type whose structure we don't know how to generate, but whose instances of the type parameter we can observe in its structure as they occur. We will only know its final structure when it is destructed and observation ceases. We can't lift instances of its type parameter to its structure, so we can't compose functions on outputs. Instead, we can compose functions with lifted inputs (and optionally outputs, i.e. map on observations), because the type has observations.
Conceptually, monad vs. comonad duality is related to the duality of induction vs. coinduction, and initial vs. final (least vs. greatest) fixpoint, because we can generate structure for a type that has an initiality, but we can only observe structure until we reach a finality.
Induction and Co-induction
Initiality and Finality
Wikipedia Coinduction
I had visited this blog page before (and not completely grasped it), then I read this page again trying to conceptualize the sums vs. products duality for eager vs. lazy evaluation.
Perhaps I am in error, but it appears that with lazy evaluation and corecursion, a monad can be used instead of a comonad, e.g. isn't it true that a stream can be abstracted by a monadic list in Haskell?
So dually, am I correct to interpret that laziness isn't necessary for modeling compositionality of coinductive types, when there is comonad in the pure (referential transparent) part where the composition is?
Edit#1: The word "compositionality" can refer to the principle that the meaning of the terms of the denotational semantics, e.g. the Comonad model, should depend only on the meaning of the fragments of the syntax it employs, i.e. the subterms. What I understand from a research paper[1], is that the compositional degrees-of-freedom of the higher-level language created by the higher-level denotational semantics, is dependent on the "free variables" in the compositionality fragments. Thus the compositionality can be affected by the evaluation order and other operational semantics. Due to the Halting Problem, where the lower-level semantics is Turing complete, the subterms will never be 100% context-free. I have proposed that when the higher-level semantics unifies lower-level concepts, compositionality is advanced. Please see the section Skeptical? -> Higher-Level -> Degrees-of-Freedom -> Formal Models -> Denotational Semantics at http://copute.com for more explanation.
[1] Declarative Continuations and Categorical Duality, Filinski, section 1.4.1, The problem of direct semantics
Edit#2: The composition of functions which do not input a comonad, with those impure ones that do, can be performed with the State monad. The comonad method, Comonad[T] -> T, is impure (see explanation in the section Skeptical? -> Higher-Level -> Degrees-of-Freedom -> Formal Models -> Denotational Semantics -> Category Theory at http://copute.com), so we must thread it across functions which might be pure, using the State monad. Thus the answer to my last question is "correct": we can purely compose any pure functions which input a Comonad, because the comonad method, (Comonad[T] -> A) -> Comonad[T] -> Comonad[A], is pure if the input function, Comonad[T] -> A, is. Also the answer to my other question is "incorrect", because a monad can abstract a comonad, but only for the history of observations (see explanation at the same section), because a monad has no interface for creating a new observation.
Shelby Moore III wrote:
Followup to the two questions in my prior comment.
Monad can't abstract a comonad, because it has no method, m a -> a, for creating a new observation. A monad can abstract the history of prior observations. Afaics, for a language with multiple inheritance, a subtype of comonad could also be a subtype of monad, thus providing a monadic interface to the history of observations. This is possible because the comonad observation factory method, m a -> a, is impure (the state of the comonad blackbox changes when history is created from it).
Composition of functions, m a -> b, which input a comonad is pure (i.e. no side-effects, referentially transparent, declarative not imperative) where those functions are pure (e.g. they do not invoke m a -> a to create a new observation). In short, the method (m a -> b) -> m a -> m b is pure if m a -> b is.
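A hedged Scala transliteration of the signatures discussed above (my own encoding, with a nonempty list as the concrete instance): extract is the m a -> a observation, and cobind is the (m a -> b) -> m a -> m b composition that stays pure when the supplied function is pure.
Code:
import scala.language.higherKinds

trait Comonad[W[_]] {
  def extract[A](w: W[A]): A                     // m a -> a
  def cobind[A, B](w: W[A])(f: W[A] => B): W[B]  // (m a -> b) -> m a -> m b
}

final case class NonEmpty[A](head: A, tail: List[A])

object NonEmptyComonad extends Comonad[NonEmpty] {
  def extract[A](w: NonEmpty[A]): A = w.head
  def cobind[A, B](w: NonEmpty[A])(f: NonEmpty[A] => B): NonEmpty[B] = {
    // apply f to every nonempty suffix, so each position is observed with its remaining history
    val suffixes = (w.head :: w.tail).tails.toList.collect { case h :: t => NonEmpty(h, t) }
    NonEmpty(f(suffixes.head), suffixes.tail.map(f))
  }
}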
Call-by-need memoizes arguments, not functions
http://augustss.blogspot.com/2011/04/ugly-memoization-heres-problem-that-i.html#3700840423100518476
Shelby Moore III wrote:
@francisco: Haskell's call-by-need (lazy evaluation) memoizes function arguments, but not functions.
=====verbose explanation======
The arguments to a function are thunked, meaning the arguments get evaluated only once, and only when they are needed inside the function. This is not the same as checking if a function call was previously called with the same arguments.
If the argument is a function, the thunk will call it once without checking if that function had been called else where with the same arguments.
Thunks are conceptually similar to parameterless anonymous functions with a closure on the argument, a boolean, and a variable to store the result of the argument evaluation. Thus thunks incur no lookup costs, because they are parameterless. The cost of the thunk is the check on the boolean.
Thunks give the same amount of memoization as call-by-value (which doesn't use thunks). Neither call-by-need nor call-by-value memoize function calls. Rather both do not evaluate the same argument more than once. Call-by-need delays that evaluation with a thunk until the argument is first used within the function.
Apologies for being so verbose, but Google didn't find an explanation that made this clear, so I figured I would cover all the angles in this comment.
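A small Scala model of the thunk described above (this mirrors the description, not GHC's actual runtime representation): a parameterless closure plus a filled/empty flag and a slot, so the argument is evaluated at most once and only when first demanded. Scala's built-in lazy val gives the same behavior; the explicit class only makes the mechanism visible.
Code:
final class Thunk[A](expr: => A) {
  private var cached: Option[A] = None          // plays the role of the boolean flag plus the result slot
  def force: A = cached match {
    case Some(v) => v                           // already evaluated: return the stored value
    case None    => val v = expr; cached = Some(v); v
  }
}

object ThunkDemo extends App {
  val arg = new Thunk({ println("evaluating argument"); 6 * 7 })
  def f(x: Thunk[Int]): Int = x.force + x.force // argument used twice, evaluated once
  println(f(arg))                               // prints "evaluating argument" once, then 84
}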
Eager vs. Lazy evaluation
See also:
https://goldwetrust.forumotion.com/t112p165-computers#4430
========================================
http://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.html#4642367335333855323
Shelby Moore III wrote:
Appreciated this article. Some points:
1. Lazy also has latency indeterminism (relative to the imperative world, e.g. IO monad).
2. A compiler strategy that dynamically subdivides map for parallel execution on multiple cores requires that map not be lazy.
3. any = or . map is not "wrong" for eager (strict) evaluation when there is referential transparency. It is slower in sequential time, but maybe faster in parallel execution (see the sketch after this list).
4. Wikipedia says that for Big O notation, O(n) is faster than O(n log n) = O(log n!). @Lennart, I think you meant to say that O(n) is faster than O(n log n).
5. Given that laziness causes space and latency indeterminism, if the main reason to use lazy is to avoid the performance hit for conjunctive functional composition over functors, then only functions which output applicable functors need apply laziness. As @martin (Odersky) suggested, provide lazy and eager versions of these functions. Thus eager by default with optional lazy annotations would be preferred.
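A hedged Scala rendering of point 3 above: under referential transparency the eager pipeline returns the same answer as the short-circuiting form, it just does more sequential work; a lazy view recovers the short-circuit.
Code:
object AnyDemo extends App {
  val xs = (1 to 1000000).toList
  val p  = (n: Int) => n > 10

  val eager  = xs.map(p).contains(true)        // maps the whole list, then searches
  val viewed = xs.view.map(p).exists(identity) // lazy view: stops at the first true
  val direct = xs.exists(p)                    // idiomatic short-circuiting form

  println(eager == viewed && viewed == direct) // true: same result, different cost
}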
Principle and 80/20 rule. 80+% of the programmers in world are not likely to grasp debugging lazy space leaks. It will only take one really difficult one to waste a work-week, and that will be the end of it. And what is the big payoff, especially with the parallelism freight train bearing down? Someone claimed that perhaps the biggest advance in mainstream languages since C, was GC (perhaps Java's main improvement over C++). Thus, if the typical programmer couldn't do ref counting without creating cycles, I don't think they will ever grasp lazy space and latency indeterminism. I am approaching this from wanting a mainstream language which can replace Java for statically typed.
Am I correct to say?
RT eager code evaluated as RT lazy could exhibit previously unseen space and latency issues.
RT lazy code evaluated as RT eager could exhibit previously unseen non-termination, e.g. infinite recursion and exceptions.
@augustss Rash judgments w/o experience is annoying. Two decades of programming, and copious reading is all I can humbly offer at this moment. This is imperfect and warrants my caution. I appreciate factual correction.
My point is that with eager, debugging the changes in the program's state machine at any function step, will be bounded to the function hierarchy inside the body of the function, so the programmer can correlate changes in the state machine to what the function is expected to do.
Whereas, with lazy any function may backtrack into functions that were in the argument hierarchy of the current function, and inside functions called an indeterminant time prior. Afaics, lazy debugging should be roughly analogous to debugging random event callbacks, and reverse engineering the state machine in a blackbox event generation module.
As I understand from Filinski's thesis, eager and lazy are categorical duals in terms of the inductive and coinductive values in the program. Eager doesn't have products (e.g. conjunctive logic, "and") and lazy doesn't have coproducts (e.g. disjunctive, "or"). So this means that lazy imposes imperative control logic incorrectness from the outside-in, because coinductive types are built from observations and their structure (coalgebra) is not known until the finality when the program ends. Whereas eager's incorrectness is from the inside-out, because inductive types have an a priori known structure (algebra) built from an initiality. Afaics, this explains why debugging eager has a known constructive structure and debugging lazy is analogous to guessing the structure of a blackbox event callback generation module.
My main goal is to factually categorize the tradeoffs. I am open to finding the advantages of lazy. I wrote a lot more about this at my site. I might be wrong, and that is why I am here to read, share, and learn. Thanks very much.
Could you tell me why lazy (with optional eager) is better for you than referentially transparent (RT) eager (with optional lazy), other than the speed of conjunctive (products) functional composition? Your lazy binding and lazy functions points should be doable with terse lazy syntax at the let or function site only, in a well designed eager language. Infinite lazy types can be done with the optional lazy in an eager language with such optional lazy syntax. Those seem to be superficial issues with solutions, unlike the debugging indeterminism, which is fundamental and unavoidable.
I wish this post could be shorter and still convey all my points.
Idea: perhaps deforestation could someday automate the decision on which eager code paths should be evaluated lazily.
This would perhaps make our debate moot, and also perhaps provide the degrees-of-freedom to optimize the parallel vs. sequential execution time trade-off at runtime. I suppose this has been looked at before. I don't know whether this is impossible, because I haven't studied the research on automated deforestation deeply enough.
I understand the "cheap deforestation" algorithm in Haskell only works on lazy code paths, and only those with a certain "foldr/build" structure.
Perhaps an algorithm could flatten to their bodies, the function calls in eager referentially transparent (i.e. no side-effects) code paths but in lazy order until a cyclical structure is identified, then "close the knot" on that cyclical structure. Perhaps there is some theorem that such a structure is bounded (i.e. "safe") in the transformation of coproducts to products correctness, i.e. space and latency determinism.
Your any = or . map example flattens to a cycle (loop) on each element of the functor (e.g. list), which converts each element to a boolean, exits if the result is true, and always discards the converted element in every possible code path of the inputs. That discard proves (assuming RT and thus no side-effects in map) the lazy evaluation of the eager code has no new coproducts, and thus the determinism in space and latency is not transformed.
There are discarded products (map did not complete), thus in a non-total language, some non-termination effects may be lost in the transformation. Any way, I think all exceptions should be converted to types, thus the only non-termination effects remaining are infinite recursion.
There is the added complication that in some languages (those which enforce the separation-of-concerns, interface and implementation), the map in your example may be essentially a virtual method, i.e. selected at runtime for different functors, so the deforestation might need to be a runtime optimization.
@augustss Are you conflating 'strict' with 'not referentially transparent'? I think that is a common misperception because there aren't many strict languages which are also RT. So the experience programmers have with strict languages is non-compositional due to the lack of RT. Afaik, composition degrees-of-freedom is the same for both strict and lazy, given both are RT. The only trade-off is with runtime performance, and that trade-off has pluses and minuses on both sides of the dual. Please correct me if I am wrong on that.
@augustss thanks, I now understand your concern. The difference in non-termination evaluation order with runtime exceptions is irrelevant to me, because I proposed that by using types we could eliminate all exceptions (or at least eliminate catching them from the declarative RT portion of the program). I provide an example of how to eliminate divide-by-zero at copute.com (e.g. a NonZero type that can only be instantiated from a case of Signed, thus forcing the check at compile-time before calling the function that has division). A runtime exception means the program is in a random state, i.e. that the denotational semantics is (i.e. the types are) not fit to the actual semantics of the runtime. Exceptions are the antithesis of compositionality regardless of the evaluation order.
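A minimal Scala sketch of the NonZero idea mentioned above (my own encoding; the original uses a Signed case analysis, here an Option-returning smart constructor stands in for it): division is only callable with a value whose type proves the zero check already happened.
Code:
final class NonZero private (val value: Int)

object NonZero {
  // the only way to obtain a NonZero is to pass the check once, up front
  def fromInt(n: Int): Option[NonZero] =
    if (n == 0) None else Some(new NonZero(n))
}

object SafeDivDemo extends App {
  def divide(numerator: Int, denominator: NonZero): Int =
    numerator / denominator.value   // no runtime zero check needed here; the type carries the proof

  NonZero.fromInt(3) match {
    case Some(d) => println(divide(12, d))               // 4
    case None    => println("refused: denominator is zero")
  }
}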
In my opinion, a greater problem for extension in Haskell (and Scala, ML, etc.) is that, afaics, they allow conflation of interface and implementation. That is explained at my site. I have been rushing this (80+ hour weeks), so I may have errors.
Another problem for extension is that Haskell doesn't have diamond multiple inheritance. Perhaps you've already mentioned that.
Modular Type Classes, Dreyer, Harper, Chakravarty.
@augustss Agreed, we must ensure termination for cases which are not bugs, but infinite recursion is a bug under strict evaluation, thus I excluded it when comparing compositionality. There is no need to go 100% total in a strict language to get the same compositionality as lazy. Perhaps Coq can ensure against the infinite-recursion bug, but that is an orthogonal issue.
The diamond problem impacts compositionality because it disables SPOT (single-point-of-truth). For example, Int can't be an instance of both Ord and OrdDivisible if OrdDivisible inherits from Ord. Thus functions that would normally compose on Ord have to compose separately on Int and IntDivisible, and thus do not compose.
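As a hedged Scala sketch (hypothetical names) of the single-point-of-truth composition meant here, with trait linearization resolving the diamond over Ord:
- Code:
// Two traits form a diamond over Ord; a single Int instance implements both,
// so a function written against the apex Ord composes with it directly.
trait Ord[A]        { def compare(x: A, y: A): Int }
trait OrdDiv[A]     extends Ord[A] { def div(x: A, y: A): A }
trait OrdBounded[A] extends Ord[A] { def minValue: A; def maxValue: A }

// Single point of truth: one object serves the whole diamond (linearized).
object IntOrd extends OrdDiv[Int] with OrdBounded[Int] {
  def compare(x: Int, y: Int) = x.compareTo(y)
  def div(x: Int, y: Int)     = x / y
  def minValue                = Int.MinValue
  def maxValue                = Int.MaxValue
}

object OrdCompose {
  // Written against Ord, yet usable with the single IntOrd instance.
  def maxOf[A](x: A, y: A)(ord: Ord[A]): A = if (ord.compare(x, y) >= 0) x else y
  val m = maxOf(6, 3)(IntOrd)
}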
@augustss If 'map pred list' doesn't terminate, it is because the list is infinite. But infinite lists break parallelism and require lazy evaluation, so I don't want them as part of the default evaluation strategy. Offering a lazy keyword (i.e. a type annotation or lazy type) can enable expression of those infinite constructions, but in my mind it should be discouraged, except where clarity of expression is a priority over parallelism. Did I still miss any predominant use case?
Thus I concur with what Existential Type replied. For example, pattern-match guards for a function are runtime function overloading (splitting the function into a function for each guard), so the compiler shouldn't evaluate the functions (guard cases) that are not called.
It appears to me (see my idea in a prior comment) that lazy evaluation is an ad hoc, runtime, non-deterministic approximation of deforestation. We need better deforestation-aware algorithms for eager compilers, so that the parallel vs. sequential execution strategy is relegated to the backend and not to the language.
Last edited by Shelby on Thu Aug 25, 2011 12:49 pm; edited 27 times in total
Computer Assisted Learning is capital's attempt to enslave mankind
I didn't write this, but I agree with it. That is not to say that computer-assisted learning can't be useful, but rather that it must be assisted by real human teachers. This is related to the post I made some time ago refuting the idea that computers could replace humans.
http://www.soc.napier.ac.uk/~cs66/course-notes/sml/cal.htm
CAL Rant
The user should have control at all times, you are not forced to go through the material in any particular order and you are expected to skip the dull bits and miss those exercises which are too easy for you. You decide. The author does not believe that CAL is a good way to learn. CAL is a cheap way to learn, the best way to learn is from an interactive, multi functional, intelligent, user friendly human being. The author does not understand how it is that we can no longer afford such luxuries as human teachers in a world that is teeming with under-employed talent. His main objection to CAL is that it brings us closer to "production line" learning. The production line is an invented concept, it was invented by capital in order to better exploit labour. The production line attempts to reduce each task in the manufacturing process to something so easy and mindless that anybody can do it, preferably anything. That way the value of the labour is reduced, the worker need not be trained and the capitalist can treat the worker as a replaceable component in a larger machine. It also ensures that the workers job is dull and joyless, the worker cannot be good at his or her job because the job has been designed to be so boring that it is not possible to do it badly or well, it can merely be done quickly or slowly. Production line thinking has given us much, but nothing worth the cost. We have cheap washing machines which are programmed to self destruct after five years; cars, clothes, shoes - all of our mass produced items have built in limited life spans - this is not an incidental property of the production line, it is an inevitable consequence.
The introduction of CAL is the attempt by capital to control the educators. By allowing robots to teach we devalue the teacher and make him or her into a replaceable component of the education machine. I do not see how such a dehumanizing experience can be regarded as "efficient", the real lesson learned by students is that students are not worth speaking to, that it is a waste of resources to have a person with them. The student learns that the way to succeed is to sit quietly in front of a VDU and get on with it. The interaction is a complete sham - you may go down different paths, but only those paths that I have already thought of, you can only ask those questions which I have decided to answer. You may not challenge me while "interacting". I want students to contradict, to question, to object, to challenge, to revolt, to tear down the old and replace with the new.
Do not sit quietly and work through this material like a battery student. Work with other people, talk to them, help each other out.
OOP in Standard ML (SML)
My prior email needs followup clarification.
From section 2.1 of your paper[1], I assume a functor's 'sig' or 'struct' can recurse, e.g.
- Code:
functor f( a : A ) sig
type b
val map : (f(a) -> b) -> f(a) -> f(b)
end
I have not yet learned how 'map' can be made polymorphic in b, independently of the creation of a concrete instance of f.
With limited study, I infer that a 'structure' is a concrete data type where all abstract 'type's and methods must be defined and implemented; that a 'signature' is an abstract type to the extent that it has inner abstract 'type's and unimplemented method signature(s); and that a 'functor' is a higher-kinded constructor (for an abstract 'sig' or concrete 'struct'), i.e. type parametrization.
Thus roughly the following informal correspondence to Scala:
signature = trait
functor,,,(...) sig = trait,,,[...]
structure = mixin
functor,,,(...) struct = mixin,,,[...]
using ... in ,,, = class ,,, extends ...
So it appears Scala is more generalized than Haskell, and perhaps equivalently so to SML (this would need more study). Scala also has type bounds in the contravariant direction and variance annotations (i.e. these would be the functor parameters in SML), as well as a bottom type Nothing.
Fyi, Iry's popular chart needs correction. Haskell and ML are higher-kinded.
http://james-iry.blogspot.com/2010/05/types-la-chart.html
> ml has higher kinds only through the module system. the class you mention
> is the type of a functor in ml. so, yes, we have this capability, but in
> a different form. the rule of thumb is anything you can do with type
> classes in haskell is but an awkward special case of what you can do with
> modules in ml. my paper "modular type classes" makes this claim
> absolutely precise.
It appears my suggested separation of abstract interface and concrete implementation would require that a sig must not reference any structure nor functor struct, and that functions must only reference a signature or functor sig.
For Copute, one of the planned modifications of Scala is that traits and functions can only reference traits.
Of course, it is a given that everything can reference the function constructor (->).
One example of the benefit of such abstraction: if we reference the red, green, and blue values of a color type, we are calling interface methods, not matching constructor values. Thus subtypes of color may implement red, green, and blue orthogonally to their constructor values.
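A minimal Scala sketch (hypothetical names) of that color example:
- Code:
// Callers see only the interface methods red/green/blue, so a subtype may
// store any constructor values and derive the channels however it likes.
trait Color {
  def red: Double
  def green: Double
  def blue: Double
}
// Stores the channels directly.
final class RgbColor(val red: Double, val green: Double, val blue: Double) extends Color
// Stores a single gray level; the interface methods derive the channels.
final class GrayColor(level: Double) extends Color {
  def red   = level
  def green = level
  def blue  = level
}
object ColorUse {
  // Abstracted over the interface, not over any constructor's fields.
  def luminance(c: Color): Double =
    0.2126 * c.red + 0.7152 * c.green + 0.0722 * c.blue
}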
I am fairly sleepy, I hope this was coherent.
[1] Modular Type Classes, Dreyer, Harper, Chakravarty, section 2.1 Classes are signatures, instances are modules
Copute for dummies
My secret weapon against the fascism that is descending on our globe, humorously explained.
It flies exponentially faster than a speeding bullet; they can't hear it, they can't see it, they can't even understand it.
If you have read any of the copute.com site before, you can read it again, because I have edited and improved nearly every section, even as recently as today.
I also suggest you watch this talk by the creator of Scala on the big change to parallelism that is occurring because computer clock speeds can't increase any more, only the number of cores.
Folks, here is the result of several weeks of research and writing: a layman's introduction to Copute and an explanation of why and how it could change the world radically:
http://copute.com/
You may read the following sections, which are non-technical. Don't read the sub-sections unless they are explicitly listed below (because you won't understand them).
Copute (opening section)
| Higher-Level Paradigms Chart
Achieving Reed's Law
What Would I Gain?
| Improved Open Source Economic Model.
| | Open Source Lowers Cost, Increases Value
| | Project-level of Code Reuse Limits Sharing, Cost Savings, and Value
| | Module-level of Reuse Will Transition the Reputation to an Exchange Economy
| | Copyright Codifies the Economic Model
Skeptical?
| Purity
| | Benefits
| | Real World Programs
| Higher-Level
| | Low-Level Automation
| | Degrees-of-Freedom
| | | Physics of Work
| | | | Fitness
| | | | Efficiency of Work
| | | | Knowledge
| State-of-the-Art Languages
| | Haskell 98
| | Scala
| | ML Languages
If you want some independent affirmation of my ideas, see the comments near the end of this expert programmer's blog page:
http://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.html#4642367335333855323
Feedback is welcome.
P.S. If you want to see what I mean about eliminating exceptions in Copute, load http://www.simplyscala.com/, then enter the following line of code, click the "Evaluate" button, and watch the red errors fly all over (then pat yourself on the back, you wrote your first program):
List().tail
=======================
=========ADD===========
=======================
http://Copute.com
If you are using FF or Chrome, make sure you use the horizontal scroll bar to view the content to the right.
I would like to have the columns scroll vertically, with page breaks for the height of the screen, but CSS3 multicol does not provide that option. I may experiment with trying to force paged media on a screen browser later, in order to get vertical instead of horizontal scrolling.
The document is about how to help people cooperate faster on the internet, using a new computer language, that will lead to much more freedom.
TIP: Do you see that western governments are trying to overthrow the governments of the countries that have oil in the Middle East and northern Africa, and they are giving money to the radical Muslim Brotherhood, because they want to cause fighting and disorganization so the oil will be shut off for some years. The reason they want to do this, is they want to make the prices of everything go very high to bankrupt all the people in the world, so that they can make us their slaves. Of course, they say this is for democracy and the freedom of the people in those countries. How stupid people are to believe them.
I suggest you read the section "Physics of Work" on my site. Then you will understand why a centralized government is always evil and going towards failure. It is due to the laws of physics. Maybe you can understand if you read that section very carefully.
http://Copute.com
Skeptical?
| Higher-Level
| | Degrees-of-Freedom
| | | Physics of Work
| | | | Fitness
| | | | Efficiency of Work
| | | | Knowledge
Social cloud is how we will filter information more efficiently
You all know about filtering of information, because you come to this forum to read a lot of the information from the web condensed and filtered for you by people you think have a common interest and outlook (or at least related outlook worth reading).
Read this article for a layman's explanation:
http://esr.ibiblio.org/?p=3614&cpage=1#comment-318814
Shelby wrote:
You've identified my model for the simultaneous optimization of CTR (ratio of clicks to views of ads) and CR (conversion ratio of visitors to sales).
This should, once sufficiently ubiquitous, make it impossible for any entity (e.g. a blog owner) to censor or centralize control over comments, because the comments (CR refinement) will continue into the social cloud.
Does this present a conflict of interest for Google, because if the social cloud is truly open, then how does it not eventually reduce the leverage to charge rents on advertising matching? Or does it increase the volume of ad spending (higher ROI for advertisers) and reward the business model with economy-of-scale for the cloud servers? Why wouldn't something like Amazon's commodity server model win over Google's smart servers model?
As information becomes more tailored to smaller groups, then P2P storage becomes less viable (unless we want to pay a huge bandwidth premium to store data on peers that don't use it), because there isn't enough redundancy of storage in your peer group, so it appears server farms are not going away. The bifurcating tree network topology is thermodynamically more efficient than the fully connected mesh (e.g. the brain is not a fully connected mesh, each residential water pipe doesn't connect individually to the main pumping station, etc).
P.S. I have become less concerned about the vulnerability (especially to fascist govt) of centralized storage, because as the refinement of data becomes more free market (individualized), it becomes more complex when viewed as a whole, thus the attackers will find it more challenging to impact parts without impacting themselves. Thus it appears to me the commodity server model wins, i.e. less intelligence in the server farm, and more in the virtual cloud.
Here were my prior posts on this matter:
Tension between CR and CTR for advertising!
Simultaneous maximization of both CR and CTR is Google's Achilles heel
=============================
http://esr.ibiblio.org/?p=3614&cpage=2#comment-319354
Isn't it very simple? Most of the comments have agreed, and I concur. Nature made whole foods slower to digest (e.g. fiber, not finely ground to infinite surface area, etc) so we don't spike our insulin, plaque our liver, etc.. Whole foods satisfy appetite because they contain the complex micro-nutrients our body craves (e.g. amino acids, etc), which processed foods do not. Complex carbs, sugars, fats should not be limited, because our body needs and will regulate these (even the ratios between food groups) signaled by our cravings. To change body form, increase exercise, don't limit diet. Processed carbs, sugars, and fats should be entirely avoided, as these screw up the feedback mechanism and probably cause the confusion referred to. The appetite feedback loop is out-of-whack when not consuming whole foods, and probably also when not exercising sufficiently. Except for outlier adverse genetics, no one eating only whole foods and exercising, needs to count calories and grams.
@Tom if the government wasn't taxing us for the healthcare of those who smoke, and those who breathe their smoke in public venues, then we would be impacted less by their smoking. Probably there would be less smokers if government wasn't subsidizing their health care and lower job performance, so then the nuisance for us non-smokers would also be mitigated.
@gottlob Isn't social media a revolution in information targeting, i.e. fitness, which is orthogonal to the "quality of demand" to which you refer?
Tangentially, I also think that knowledge will soon become fungible money (and I don't mean anything like BitCoin, of which I am highly critical), in the form of compositional programming modules, which will change the open source model from esr's gift to an exchange economy. Remember from esr's recent blog, my comment was that software engineering is unique in that it is used by all the others, and it is never static, and thus is a reasonable proxy for (fundamental of) broad based knowledge. I suggest a broader theory, that the industrial age is dying, which is why we see the potential for billions unemployed. But the software age is coming up fast to the rescue. Open source is a key step, but I don't think the gift economy contains enough relative market value information to scale it to billions of jobs.
@Ken doesn't genetics matter at the extremes of desired outcome or adverse genetics, where whole foods and reasonable level of physical activity is sufficient for most, e.g. some fat is desirable and necessary normally?
===================================
http://esr.ibiblio.org/?p=3634&cpage=1#comment-319373
Perhaps "exeunt" because by implication, it is the company exited the stage.
I am saddened to read that Jobs was back in hospital on June 29, that he won't be around to contribute and observe the ultimate outcome of the software age and open source. Contrasted with that I want to eliminate his captive market for collecting 30% rents on software development and dictating morality of apps. The competitive OPEN (not captive) balance is for investment capital is to extract about 10% of the increase in capital.
Of course Apple will lose market share in not too distant future, as will every capital order that exists due to more than 10% capture of the increase resulting from its investment. Btw, this is why a gold standard can never work, because society can't progress when new innovation and labor (i.e. capital) is enslaved to pre-existing capital (prior innovation and labor).
In my mind, the more interesting question is what happens to Google's ad renting captive market (I suspect taking way more than 10% of the increase in many cases), when the revolution of information targeting fitness of OPEN (i.e. not captive) social media has the same explosion into entropy that Android did to Apple. The waterfall exponential shift won't take more than 2 years once it begins. I suppose Google will be forced to lower their take (as Android will force Apple), thus exponentially increasing the size of the ad market is critical. So the motivation for Android is clear, but ironically it may accelerate Google's transformation from an ad company to a software company. But as a software company, I expect Google will be much more valuable as a collection of individuals or smaller groups, thus there will be an exodus of talent. I don't yet know how many years forward I am looking, probably not even a decade.
Last edited by Shelby on Thu Aug 25, 2011 10:00 am; edited 2 times in total
How to make knowledge fungible, i.e. a currency
UPDATE: a new post on Entropic Efficiency summarizes perhaps better than this post does.
Due to technical advances in code reuse, Copute seems to have the potential to become the next mainstream programming language, regardless of whether I can successfully design a new economic model for open source, i.e. exchange (capitalism) instead of gift (socialism).
Yet I am very curious about how knowledge could become a fungible currency.
One of the key insights was that software development is the only engineering discipline that is used by all the others. Thus software development is a fundamental expression of knowledge. So if there was some way to make software development fungible, then by proxy, knowledge would be also.
The idea had been that if there was some way people could create reusable software modules, then these could be exchanged as building blocks of larger software modules and applications. In theory, Copute is designed to make the modules reusable, so the remaining two challenges for a capitalistic exchange were:
1. how to define a fungible unit of the knowledge currency
2. how to convert this unit to the general economy
The unit of a knowledge currency would be, due to the proposed proxy for knowledge, a unit of programming code. But the challenge was how to define such a unit so that it reflects a free-market value and yet remains fungible. First of all, there is no known standard measure of code value; e.g. lines of code (LOC) appears to be poorly correlated with code complexity and market value. Note that every supply/demand curve has price on one of its axes and quantity on the other. Thus if there were competing programming modules and their price was equivalent, then the relative quantity of demand would determine which had more market value. Price is useful when it contains information about relative market value, but if the software development process needs to incorporate many modules, and modules which use other modules, price contains no relative information about market value, because the developer can't possibly factor in all of these economic relationships, and modules wouldn't be able to reliably reuse other modules if the reused modules' prices could change. So the first key conclusion is that the unit of programming-code price must be standardized on some metric which is reasonably correlated with effort, and then relative market value can be measured by the relative quantity of demand for competing modules.
When designing the Copute grammar, I realized that everything is a function with one input parameter. Thus an unambiguous measure of complexity is the number of such single-parameter functions in a module. And thus the single-parameter function is the basic unit of programming code, and by proxy the unit of knowledge. Thus the relative complexity (and thus relative price) of code can be automatically determined at compile time.
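As a hedged illustration (Scala syntax here; Copute's would differ), currying makes this counting unit well defined:
- Code:
// With currying, a function of several parameters is just nested
// single-parameter functions, so "number of single-parameter functions"
// is an unambiguous, compiler-countable measure.
object FunctionUnits {
  val add:  Int => Int => Int        = a => b => a + b           // 2 units
  val add3: Int => Int => Int => Int = a => b => c => a + b + c  // 3 units
  val five = add(2)(3)
  val six  = add3(1)(2)(3)
}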
But how do we convert this unit to a price in the general economy, such that an incorrect automated relative price of modules would not skew the relative demand, e.g. code that could fetch a much higher price and still gain the same quantity of relative demand? The solution is of course competition. If a module owner can get a better (quantity x price) in another marketplace, they will. Additionally, the proposed marketplace will be non-exclusive, because the conversion of this knowledge unit to a price in the general economy will not be based on the relative choice of modules. In other words, the consumer of these units will not pay per unit, but per a metric of the total value added.
Notice that most programming languages and libraries are given away for free. This is the gift economy, i.e. I scratch your back if you scratch mine, and let's not keep very specific accounting of it. Thus the improvement should have the same level of efficiency, while capturing more market-value information. The efficiency is basically that it makes no sense to pay out a large percentage of a software development's cost (or potential profit) to reuse existing work, because otherwise new innovation and labor becomes a slave to capital (old innovation and labor). Thus the marketplace of modules offered for reuse should only extract, for example, the greater of 1% of the gross sales or 10% of the development cost of the software project reusing any quantity of the modules offered. This 10% comes from the Bible's wisdom to take only 10% of the increase. The 1% of gross assumes that a company should pay at least 10% of gross sales in (programming, i.e. knowledge) development costs, and our model asks for 10% of that 10%. There should be some threshold, so the vast majority of individual developers never pay.
So the offer to the software developer is: you can reuse as many modules as you want from our marketplace, and you pay us the greater of 1% of your gross sales or 10% of your development cost for the software product developed. This income is then distributed to the module owners, using the relative market value of (single-parameter function units x quantity of module reuse).
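For example (hypothetical numbers): a project with $1,000,000 in gross sales and $150,000 in development cost would owe the greater of 1% x $1,000,000 = $10,000 or 10% x $150,000 = $15,000, i.e. $15,000, which would then be split among the owners of the reused modules in proportion to (single-parameter function units x quantity of module reuse).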
There is one remaining problem: bug fixes and module improvements by third parties. How do we motivate third-party bug fixes and module improvements without penalizing the module owner? The module owner wants bug fixes and improvements, but doesn't want to pay more for them than they are worth. It seems best to let the module owner set a bounty on specific items in the bug and feature-request database, and that bounty can be a percentage of the module's income.
This seems very workable and sound. I am hopeful that this makes knowledge into a fungible currency, with radical implications for the world.
=================
Additional Explanation
=================
Read the entire thread that is and precedes the following linked post:
https://goldwetrust.forumotion.com/t44p75-what-is-money#4535
==============================
http://esr.ibiblio.org/?p=3634&cpage=4#comment-320342
@Jeff Read
That research correlated SLOC with fault-proneness complexity. It is possible (and I expect likely) that, for the same application complexity, bloated code has more faults than dense code.
A metric that correlates with application complexity (as a proxy for relative price in price x market quantity = value) is a requirement in the exchange model for open source that I am proposing. I will probably investigate a metric based on lambda-term count and denotational-semantics (i.e. userland type system) complexity.
The point is that a metric for fault-proneness complexity may not be correlated with application complexity, i.e. the effort and difficulty of the problem solved.
================
UPDATE: It is very important that the minimum level of royalty-free reuse of code modules has at least the following 3 qualities:
- Legal stipulation that the royalty-free reuse of code modules (from copute.com's repository only) applies at least minimally to the ability of an individual to support himself with a small company. So the limits should be roughly the development cost at which one full-time programmer is employed (for the royalty on the development-cost option), or the size of the market necessary to support the livelihood of an individual and his family, whichever is greater.
- Legal stipulation in the software license, for all current AND FUTURE modules, that the limits in #1 above may never be decreased (although they could be increased).
- Legal stipulation that, during any period in which a court in any jurisdiction removes the force of these terms, the license of the software module grants the people (but not corporations) in that jurisdiction a royalty-free license to all the modules in copute.com's repository, without any limitation. Then the courts will have every software module owner up in arms against them.
The reason is that if, for example, copute.com gained a natural monopoly by being the first mover and gaining inertia from the greatest economy-of-scale of modules, then it would be possible for passive capital to gain control over copute.com and change the terms of the company to extract unreasonable (parasitic) rents from the entire community; thus we would be right back to a fiat-like system, where the capitalists control the knowledge workers' capital. Since those paying market-priced royalties in Copute will be the larger corporations, if the government decides to heavily tax it, they will be taxing their own passive capitalists. The design of Copute is to tax the frictional transaction costs in the Theory of the Firm that give rise to the corporation. Thus the design of Copute's economic model is for its value to decrease in legal-tender terms as time goes on, while its value in knowledge terms increases. This is a unique economic design, which I don't think has existed in any business model I am aware of.
Having these assurances will encourage more contribution to copute.com's repository. I would encourage competing repositories, and for them to adopt a similar license strategy.
Last edited by Shelby on Sun Sep 11, 2011 4:38 pm; edited 7 times in total
Creator of the browser, says software will now take over the world
Read my prior post about software being in everything, and thus a proxy for knowledge, then read this:
http://online.wsj.com/article/SB10001424053111903480904576512250915629460.html (make sure you watch the video!)
Here is an excerpt:
My own theory is that we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy.
More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.
Why is this happening now?
Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.
Over two billion people now use the broadband Internet, up from perhaps 50 million a decade ago, when I was at Netscape, the company I co-founded. In the next 10 years, I expect at least five billion people worldwide to own smartphones, giving every individual with such a phone instant access to the full power of the Internet, every moment of every day.
On the back end, software programming tools and Internet-based services make it easy to launch new global software-powered start-ups in many industries—without the need to invest in new infrastructure and train new employees. In 2000, when my partner Ben Horowitz was CEO of the first cloud computing company, Loudcloud, the cost of a customer running a basic Internet application was approximately $150,000 a month. Running that same application today in Amazon's cloud costs about $1,500 a month.
With lower start-up costs and a vastly expanded market for online services, the result is a global economy that for the first time will be fully digitally wired—the dream of every cyber-visionary of the early 1990s, finally delivered, a full generation later.
Perhaps the single most dramatic example of this phenomenon of software eating a traditional business is the suicide of Borders and corresponding rise of Amazon. In 2001, Borders agreed to hand over its online business to Amazon under the theory that online book sales were non-strategic and unimportant.
Oops.
Today, the world's largest bookseller, Amazon, is a software company—its core capability is its amazing software engine for selling virtually everything online, no retail stores necessary. On top of that, while Borders was thrashing in the throes of impending bankruptcy, Amazon rearranged its web site to promote its Kindle digital books over physical books for the first time. Now even the books themselves are software.
Today's largest video service by number of subscribers is a software company: Netflix. How Netflix eviscerated Blockbuster is an old story, but now other traditional entertainment providers are facing the same threat. Comcast, Time Warner and others are responding by transforming themselves into software companies with efforts such as TV Everywhere, which liberates content from the physical cable and connects it to smartphones and tablets.
Today's dominant music companies are software companies, too: Apple's iTunes, Spotify and Pandora. Traditional record labels increasingly exist only to provide those software companies with content. Industry revenue from digital channels totaled $4.6 billion in 2010, growing to 29% of total revenue from 2% in 2004.
Today's fastest growing entertainment companies are videogame makers—again, software...
I'm becoming skeptical about the claim that pure functional is generally log n slower
http://stackoverflow.com/questions/1255018/n-queens-in-haskell-without-list-traversal/7194832#7194832
Shelby Moore III wrote:
I am becoming skeptical about [the claim][1] that pure functional is generally O(log n) slower. See also Edward Kmett's answer, which makes that claim. That may apply to random mutable array access in the theoretical sense, but random mutable array access is probably not what most algorithms require, once they are properly studied for repeatable structure, i.e. not random. I think Edward Kmett refers to this when he writes, "exploit locality of updates".
I am thinking O(1) is theoretically possible in a pure functional version of the n-queens algorithm, by adding an undo method to the DiffArray which looks back in the differences to remove duplicates and avoid replaying them.
If I am correct in my understanding of how the backtracking n-queens algorithm operates, then the slowdown caused by the DiffArray is because unnecessary differences are being retained.
In the abstract, a "DiffArray" (not necessarily Haskell's) has (or could have) a set-element method which returns a new copy of the array and stores a difference record with the original copy, including a pointer to the new changed copy. When the original copy needs to access an element, this list of differences has to be replayed in reverse to undo the changes on a copy of the current copy. Note there is even the overhead that this singly-linked list has to be walked to its end before it can be replayed.
Imagine instead that these were stored as a doubly-linked list, and that there was an undo operation as follows.
From an abstract conceptual level, what the backtracking n-queens algorithm does is recursively operate on some arrays of booleans, moving the queen's position incrementally forward in those arrays on each recursive level. See [this animation][2].
Working this out in my head only, I visualize that the reason the DiffArray is so slow is that when the queen is moved from one position to another, the boolean flag for the original position is set back to false and the new position is set to true, and these differences are recorded, yet they are unnecessary, because when replayed in reverse the array ends up with the same values it had before the replay began. Thus instead of using a set operation to put the flag back to false, what is needed is an undo method call, optionally with an input parameter telling the DiffArray what "undo to" value to search for in the aforementioned doubly-linked list of differences. If that "undo to" value is found in a difference record in the doubly-linked list, no conflicting intermediate changes to that same array element are found when walking back in the list, and the current value equals the "undo from" value in that difference record, then the record can be removed and the old copy can be re-pointed to the next record in the doubly-linked list.
What this accomplishes is to remove the unnecessary copying of the entire array on backtracking. There is still some extra overhead compared to the imperative version of the algorithm, for adding and undoing the addition of difference records, but this can be nearer to constant time, i.e. O(1).
If I correctly understand the n-queens algorithm, the lookback for the undo operation is only one record, so there is no walk. Thus it isn't even necessary to store the difference for the set element when moving the queen position, since it will be undone before the old copy is accessed. We just need a way to express this type-safely, which is easy enough to do, but I will leave it as an exercise for the reader, as this post is too long already.
[1]: https://goldwetrust.forumotion.com/t112p180-computers#4437
[2]: http://en.wikipedia.org/w/index.php?title=Eight_queens_puzzle&oldid=444294337#An_animated_version_of_the_recursive_solution
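Below is a speculative Scala sketch of the undo-aware structure described above (names are hypothetical and this is not Haskell's DiffArray API); it only illustrates the bookkeeping, not a tuned implementation:
- Code:
// set pushes a difference record instead of copying the array; undo pops the
// most recent record for an index, so a queen move followed by its reversal
// cancels in O(1) and leaves no replay list behind.
final case class Diff(index: Int, oldValue: Boolean, newValue: Boolean)

final class UndoArray private (base: Vector[Boolean], diffs: List[Diff]) {
  // Reads consult the most recent diff for the index, else the base vector.
  def apply(i: Int): Boolean =
    diffs.find(_.index == i).map(_.newValue).getOrElse(base(i))

  // Records a difference instead of copying the whole array.
  def set(i: Int, v: Boolean): UndoArray =
    new UndoArray(base, Diff(i, apply(i), v) :: diffs)

  // Cancels the most recent set when it targeted index i (lookback of one).
  def undo(i: Int): UndoArray = diffs match {
    case Diff(`i`, _, _) :: rest => new UndoArray(base, rest)
    case _                       => this // nothing cancellable at the head
  }
}
object UndoArray {
  def ofSize(n: Int): UndoArray = new UndoArray(Vector.fill(n)(false), Nil)
}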
The backtracking n-queens algorithm is a recursive function that takes three parameters: an array for each diagonal direction (\ and /), and a row count. It iterates over the columns of that row, moving the queen's position on that row in the arrays, and recursing on each column position with cur_row + 1. So it seems to me the movement of the queen's position in the arrays is undoable as I have described in my answer. It does seem too easy, doesn't it? So someone please tell me why it isn't, or I will find out when I write out an implementation in code.
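For reference, a rough Scala skeleton of that backtracking recursion, using persistent sets for the occupied columns and the two diagonal directions instead of the boolean arrays discussed:
- Code:
// Counts solutions by placing one queen per row; a square is attacked if its
// column, its row+col diagonal, or its row-col diagonal is already occupied.
object NQueens {
  def count(n: Int): Int = {
    def go(row: Int, cols: Set[Int], diag1: Set[Int], diag2: Set[Int]): Int =
      if (row == n) 1
      else (0 until n).map { col =>
        if (cols(col) || diag1(row + col) || diag2(row - col)) 0
        else go(row + 1, cols + col, diag1 + (row + col), diag2 + (row - col))
      }.sum
    go(0, Set.empty, Set.empty, Set.empty)
  }
  val solutionsFor8 = count(8) // 92
}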