simplifying the Copute grammar
In the attached file, I am overhauling the LL(k) SLK parser grammar that originally appeared here:
http://copute.com/dev/docs/Copute/ref/grammar.txt
http://members.cox.net/slkpg/
Note the attached incomplete grammar is already LL(5), i.e. 5 tokens of lookahead, while the prior grammar was LL(3).
I have significantly increased the documentation in the grammar.
One of the first goals has been to eliminate the sort of multiplicitous syntax readability problem with Scala:
https://goldwetrust.forumotion.com/t112p165-computers#4420
So now in Copute, there is only one way to declare a function, which is an expression that can be used wherever a function of the correct type is expected:
- Code:
\s -> s.length
And there is only one way to declare a reference to a function (e.g. for a method); the type of the function is always declared by what the function is assigned to (imagine also passing the function as an argument to another function), e.g.:
- Code:
len : Int -> Int = \s -> s.length
When declared within the body of an interface, mixin, or class, the above is a method (i.e. a function that inputs 'this' as an undeclared first argument). To declare a function that does not input 'this', the read+write properties should be declared (note constructor arguments are always (public,const), but can be obscured by assigning to a property with the same identifier), e.g.:
- Code:
len (public,public) : Int -> Int = \s -> s.length
A method can be declared private:
- Code:
private len : Int -> Int = \s -> s.length // as in Scala, private may be qualified with a package name, e.g. private[name]
So thus the interface to a method is just the first part:
- Code:
len : Int -> Int
And the inherited implementation in class or mixin does not need to restate the function type:
- Code:
len = \s -> s.length
Note the optional default arguments and lazy-evaluation declarations are specified where the function is, but the function interface declaration can be separated from the function body; thus in the interface:
- Code:
len : Int -> Int -> Int = \s, &pos = 0
And the implementation in the mixin or class is not required to repeat it:
- Code:
len = \s, pos -> s.substring( pos ).length
My other goals are to make type inference more local, and to consistently enforce where type should or must be specified. For example, types cannot be specified in the function itself, which I think makes reading a function too dense anyway:
\s : Int, pos : Int -> s.substring( pos ).length // syntax error
For the default case, it is no longer necessary to prepend 'var' or 'cref', as it is in Scala (e.g. 'def'), so the syntax is less verbose.
I am also trying to convert some statements (e.g. switch) into expressions, to make the syntax more bijective (compatible in both directions) with Scala, since in Scala everything is an expression and there are no statements.
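To see why this matters, here is a TypeScript sketch (my own illustration, not from this post): `switch` is a statement there too, so it cannot sit on the right of `=` unless wrapped in an immediately-invoked function, which is the effect the post wants `switch` to have natively.

```typescript
// `switch` is a statement in TypeScript, so to use its result as a value
// we must wrap it in a function and call it. An expression-oriented language
// would let the switch itself be the right-hand side.
const code: number = 2;
const label: string = (() => {
  switch (code) {
    case 1: return "one";
    case 2: return "two";
    default: return "many";
  }
})();
// label === "two"
```

In a language where switch is an expression, the wrapper function would be unnecessary.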
> That's good. I've long been a fan of "one and only one way to do things".
> In fact, I've suggested Scala delete methods and just require methods be
> defined as "val mult = (x: Int, y: Int) => x * y".
Your proposal without the return type inferred would be:
- Code:
val mult = (x: Int, y: Int): Int => x * y
The Copute method is declared:
- Code:
mult : Int -> Int -> Int = \x, y -> x * y
I prefer the type of the instance reference 'mult' to be unconflated from the rvalue declaration. It is crowded to read the arguments (x and y) amid the type noise, especially when you add call-by-name (lazy evaluation) and default-value annotations to arguments. This also unifies non-function types with function types in terms of where the type declaration appears in an instance reference declaration. I prefer -> instead of Scala's => for the function type constructor, because the latter looks too much like, say, +=, and it often appears close to an assignment =, making it look like = ... = ...; in general I don't like overloading the meaning of '=' to mean something other than assignment. Also, -> is whitespace friendly and is consistent with Haskell, other languages, and I think also some lambda calculus notations.
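The separated style being advocated can be sketched in TypeScript (my own illustration): the type lives on the reference, and the lambda on the right carries no parameter-type annotations.

```typescript
// Type on the reference; the curried arrow mirrors Copute's
//   mult : Int -> Int -> Int = \x, y -> x * y
const mult: (x: number) => (y: number) => number = x => y => x * y;
// mult(3)(4) === 12
```

The lambda stays free of type noise, and `mult` can be passed anywhere a value of that function type is expected.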
A Copute non-method member which is a function type is differentiated from a method by the explicit declaration of its getter and setter permissions:
- Code:
mult (public, public) : Int -> Int -> Int = \x, y -> x * y
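The method vs function-typed-member distinction has a rough TypeScript analogue (a sketch; the class name `Ops` and its members are hypothetical, not from this post):

```typescript
// A method receives `this` implicitly; a function-typed property is an ordinary
// member whose value happens to be a function (loosely like a Copute member
// declared with explicit getter/setter permissions).
class Ops {
  multMethod(x: number, y: number): number { return x * y; }          // method: has `this`
  multField: (x: number) => (y: number) => number = x => y => x * y;  // function-typed member
}
```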
- Code:
sources:
{ source }+
source:
class
reference
expression
ifElse // TODO: move this to 'primary' and change 'then' to use 'exprSansIfelse' instead of 'expression', and add 'else' to that line
scope:
'{' [ NL ] sources [ NL ] '}'
reference:
[ refType ] ID typeInit // without refType, defaults to 'cref' for pass-by-reference, and 'var' for pass-by-value, instances
refType ID init // init without type is only allowed if prepended with a refType, because otherwise it is ambiguous with an assignment expression, e.g. there would be no way to obscure a reference of the same name in an outer scope, and an assignment-expression reference-name typo would be silently accepted as construction of a new reference
refType:
'const' // the instance can not be modified, regardless if it is pass-by-reference or pass-by-value
'ref' // the reference can be modified to point to a different instance, applies only to pass-by-reference instances
'const' 'ref' // combines the above meanings, note the reference is not constant.
'var' // 'const' 'var' is not allowed because has same meaning as 'const'
'cref' // 'const' 'cref' is not allowed because has same meaning as 'const'
typeInit:
: nfType [ [ NL ] typeInit2 ] // init is optional if cType has a default constructor
tArgList [ NL ] ':' fType [ NL ] '=' [ NL ] function
//init when refType is not specified, this is ambiguous with an assignment expression. Disambiguate from assignment when ID is not already declared. Note this means a typo on ID will silently fail assignment and create a reference that is never accessed (so maybe we can detect on compilation and error). To hide an ID of same name in outer scope, then refType must be specified. If we did not allow assignment to declare new references, then mixin inheritance of an interface would require verbose 'cref' prepended to method implementations (due to the type being already specified in the interface).
init:
'=' [ NL ] rightExpr
typeInit2:
init
nextType [ NL ] '=' [ NL ] function
nfType:
cType // include standard classes for the builtin types, Int, Float, String, Bool
fType:
nfType [ NL ] nextType
fType2
fType2:
'(' fType ')' [ NL ] nextType
'void' [ NL ] '->' [ NL ] xtype // 'void' not an instance reference type, thus can not be a class, only allowed as a single argument or return type
xtype:
nfType
'(' fType ')'
type:
nfType [ [ NL ] nextType ]
fType2
nextType:
'->' [ NL ] xtype [ [ NL ] nextType ]
'->' [ NL ] 'void' // 'void' not an instance reference type, thus can not be a class, only allowed as a single argument or return type
function:
fDecl [ NL ] fBody
fDecl:
[ purity ] '\' [ fArgs ]
fArgs:
fArg [ [ NL ] ',' [ [ NL ] fArgs ] ]
fArg:
[ '&' ] ID [ init ] // the optional '&' is for pass-by-expression, i.e. lazy evaluation, i.e. argument expression is implicitly wrapped in a function call that returns the evaluated expression. Note not allowed for return type.
purity:
'impure'
'mutable'
fBody:
'->' [ NL ] rightExpr
scope
expression:
assign
rightExpr
assign:
leftExpr [ NL ] assignop [ NLI ] rightExpr
leftExpr:
ID { [ NL ] memberRef } // only a named reference, or its class properties, may be assigned to. This simplifies our grammar significantly. Since we are targeting pure programming, we discourage assignment to a reference which is an unsaved return value-- because in pure programming we can only return a new instance, and it thus has not been saved anywhere, so assignment would only be useful for impure, side-effect programming. Such impure programming can still be done (save a return value to a reference before assigning to it). Note, a method which inputs a 'void' can be called without '()', so semantic analysis should not allow assigning to the return value of such, nor to its class properties.
//conflicts with call below: '(' assign ')' { [ NL ] memberRef }+ // an assign has saved a reference in an ID, so we can assign to its class properties.
memberRef:
'.' [ NLI ] ID // no [ NL ] after the operator, http://code.google.com/p/copute/issues/detail?id=31
rightExpr:
boolOr [ [ NL ] '?' [ NLI ] assign [ NL ] ':' [ NLI ] assign ] // aka 'ternary' operator
boolOr:
boolAnd { [ NL ] '||' [ NLI ] boolAnd }
boolAnd:
bitOr { [ NL ] '&&' [ NLI ] bitOr }
bitOr:
bitXor { [ NL ] '|' [ NLI ] bitXor }
bitXor:
bitAnd { [ NL ] '^' [ NLI ] bitAnd }
bitAnd:
equality { [ NL ] '&' [ NLI ] equality }
equality:
compare { [ NL ] equalityop [ NLI ] compare }
compare:
extrapolate { [ NL ] compareop [ NLI ] shift }
shift:
concat { [ NL ] shiftop [ NLI ] concat }
concat:
add { [ NL ] '#' [ NLI ] add }
add:
multiply { addop [ NLI ] multiply } // see NLPLUS and NLMINUS, we can't put a NL in front of addop, because NL also separates expr, and unary contains an addop on its left-side in prefixop
multiply:
unary { [ NL ] multiplyop [ NLI ] unary }
unary:
[ prefixop ] [ postfixop ] instance [ postfixop ]
instance:
primary { [ NL ] memberRef } [ callExpr ] // Semantic analysis must determine if 'memberRef' and/or 'callExpr' is allowed on 'primary'. Also, a function call might not return an instance (i.e. 'void') for an impure function that has side-effects.
'new' [ NL ] cDecl call
primary:
ID
'(' [ NL ] expression [ NL ] ')'
'(' [ NL ] function [ NL ] ')'
switch
callExpr:
call { callSuffix }
callSuffix:
call
[ NL ] memberRef
call:
'(' [ NL ] fParams [ NL ] ')'
fParams:
expression [ [ NL ] ',' [ [ NL ] fParams ] ]
ifElse:
'if' '(' rightExpr ')' then
then:
[ NLI ] expression // use NLI instead of NL, http://code.google.com/p/copute/issues/detail?id=28
[ NL ] scope [ [ NL ] 'else' block ] // if there is an 'else', force braces on the 'if', http://code.google.com/p/copute/issues/detail?id=21#c2
block:
[ NLI ] expression // use NLI instead of NL, http://code.google.com/p/copute/issues/detail?id=28
scope
switch:
'switch' '(' [ NL ] expression [ NL ] ')' [ NL ] '{' [ NL ] { case } [ default ] '}' // We force the '(' and ')' because, http://code.google.com/p/copute/issues/detail?id=30&can=1
case:
'case' rightExpr ':' [ NL ] sourcesOrEmpty [ NL ] // allow empty case ';', so that user can do noop for specific cases without a default, to make sure all cases are coded
sourcesOrEmpty:
sources
';'
default:
'default' ':' [ NL ] sourcesOrEmpty [ NL ]
class:
genre ID [ [ NL ] tArgList ] [ NL ] cDecl // if INTERFACE, then ID must begin with "I[A-Z]", else "[A-HJ-Z][^A-Z]", http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
anonyClass:
genre [ [ NL ] funcArgs ] [ NL ] cDecl
genre:
'case'
'class'
'mixin'
'interface'
tArgList:
'<' tArgs '>'
tArgs:
[ NL ] tArg [ [ NL ] ',' [ tArgs ] ]
tArg:
[ variance ] ID [ tArgList ] [ tConstrain ] // the 'tArgList' is for a higher-kinded type parameter
variance:
'+'
'-'
tConstrain:
subTypeOf [ NL ] type [ superTypeOf [ NL ] type ]
superTypeOf [ NL ] type
subTypeOf:
':'
superTypeOf:
'<:'
cDecl:
[ inherit [ NL ] ] [ constructor [ NL ] ] cBody // semantic analysis must not allow 'constructor' for 'interface' and 'mixin'
inherit:
'inherits' cTypes
constructor:
[ 'private' [ NL ] ] fDecl
cTypes:
[ NL ] cType [ [ NL ] ',' [ cTypes ] ]
cType:
ID [ tParamList ] // ID is class
anonyClass
tParamList:
'<' types [ NL ] '>'
cBody:
'{' [ [ NL ] cMember { [ ',' ] [ NL ] cMember } ] [ NL ] '}'
cMember:
[ 'private' ] cMemberSuffix
cMemberSuffix:
cmBody
class // Note, this can be a CASE
cmBody:
[ 'static' ] ID [ etters ] [ [ NL ] typeInit ] // this can be a 'CASE ID', when 'genre' is 'interface', because no ID without 'etters' in 'interface'
'case' ID
etters:
'(' access ',' access ')' // The modifier that precedes ID must not be 'private'
access:
'public'
'new'
'private'
// Operator sets
prefixop:
addop // http://stackoverflow.com/questions/2624410/what-is-the-purpose-of-javas-unary-plus-operator
NLPLUS // http://code.google.com/p/copute/issues/detail?id=23, where this terminal will conflict with an NL or addop expected, otherwise process as PLUS...
NLMINUS // ...or MINUS
'!'
'~'
TYPEOF // can be used with type parametrization to fork functionality, although this is equivalent to overloading and/or using an case class, so I may deprecate this
postfixop:
'++'
'--'
multiplyop:
'*'
'/'
'%'
addop:
'+'
'-'
shiftop:
'<<'
'>>'
'>>>'
compareop:
// '<' // use MORE instead and flip operands, conflicts with sequence type ('tParamList')
'>'
'<='
'>='
equalityop:
'=='
'!='
'===' // compare the instance data (or overloaded), same as EQUAL for pass-by-value types
'!==' // compare the instance data (or overloaded), same as NOTEQUAL for pass-by-value types
assignop:
'='
'&='
'|='
'^='
'*='
'/='
'%='
'+='
'-='
'<<='
'>>='
'>>>='
'#='
Last edited by Shelby on Thu Dec 01, 2011 7:52 pm; edited 13 times in total
World Without Web
I like the use of the word serendipity. Now imagine a future where Copute does not happen.
Al Gore didn't create the Internet; hackers at DARPA did, by lying to the government.
I have to cheer Eric S. Raymond on this one.
http://esr.ibiblio.org/?p=3331#comment-310506
Yes, it’s true. I was there [in 1983]. See “a roomful of hackers with an innate distrust of hierarchy and a strong allergy to system designs with single-point vulnerability”. Our distrust extended to systems vulnerable to political single-point failures as well as technical ones.
http://esr.ibiblio.org/?p=3331#comment-310483
Do you get it yet, Ms. Love? You rely on open-source software written by me and my peers for “day-to-day functionality” every day you touch the Internet or anything that talks to it. But you don’t see it, because we’ve done our job so well that our part of the infrastructure is almost invisible to you. You are free to make snotty remarks about our inability to meet user needs only because we have created for you the luxury of ignorance. And you are able to dismiss our concerns as hysteria only because more layers of ignorance lie between you and the long struggle we have waged against similar power grabs in the past
http://esr.ibiblio.org/?p=3335
World Without Web...
...I’m going to sketch an all-too-plausible alternate history in which the World Wide Web never happened.
The divergence point for this history is in 1983-1984, when the leadership of DARPA lied through its teeth to Congress about who was being allowed access to the Internet. The pretense being maintained was that only direct affiliates of government-sponsored, authorized research programs were being allowed on. DARPA knew very well this wasn’t true; they had a broader vision of where the Internet might lead, one that wouldn’t be realized for another ten years. They viewed the yeasty, chaotic culture of casual use by all manner of ‘randoms’ (unauthorized people including, at the time, me) as what would later be called a technology incubator – a vast Petri dish from which amazing serendipities would spring in due time...
http://esr.ibiblio.org/?p=3335#comment-310551
>Sounds like you like government after all.
Remember, I attribute the takeoff of the Internet to DARPA deliberately lying to its political masters. This happened precisely because DARPA’s leadership knew it was impossible within the incentives operating on governments for them to tolerate the ‘randoms’ or the results DARPA foresaw.
http://esr.ibiblio.org/?p=3335#comment-310626
>Further, like most advocates of the state who say “if there was no government who would build roads?”
If you think this what-if qualifies me as an “advocate of the state”, you’re out of your fucking mind.
I don’t know what evolutionary path would have gotten a free market in telecomms to an Internet. I do know that we didn’t get to run that experiment, because there was a government-created telecomms monopoly firmly in place. Even after the AT&T divestiture our infrastructure retained the stamp of that monopoly in ways both large and small. We are just lucky that DARPA subverted the normal centralizing tendency of government at a crucial time.
http://esr.ibiblio.org/?p=3335#comment-310703
>But I will point to the existence of the SF mailing list as evidence that, just as Eric says, the DARPA folks were lying about off-program use of the Arpanet.
Heh. I’d bet that half the DARPA program monitors were on that list. Here, on information and belief, is how I think their reasoning went:
“This list is fun. It’s also an experiment in a novel mode of communication (most historians, including me, think it was the very first Internet mailing list). As such, it’s legitimately part of our research into the consequences of cheap, robust wide-area networking. And so is letting on all those college kids who don’t technically qualify under the program charter. But if we try to explain that at the Congressional hearings, we’re sure as shit going to get Proxmired.”
It was a reasonable fear. William Proxmire trashed a lot of immensely valuable research in the course of his political grandstanding. Once or twice he even apologized afterwards, but the damage had been done. I’ve had it pretty strongly hinted to me that certain of those responsible believed lying to Congress was less risky than telling the truth where Proxmire could get wind of it.
I suspect the Internet program managers were neither the first nor the last to make that choice. And be justified in making it.
Last edited by Shelby on Mon Jun 20, 2011 7:01 am; edited 2 times in total
Ahhh, State monad and Traversable come clear in my mind and my explanation
Finally I have a very good understanding and explanation of the State monad, Traversable, and everything I have been working on in this file.
Also, I have generalized Traversable beyond the state-of-the-art in published research.
I am very satisfied with my explanation and code for the State monad, as well as the examples I gave for using it with ITraversable. Try as you might, I doubt you will find any equivalently concise layman example implementations of the State monad that are not in obtuse Haskell syntax. Finally I can say that a common person can fully understand the State monad (and Traversable, Applicative, etc.) without pulling their hair out for days and weeks (as I and many others have done).
Also, I now have a sentence that begins with "Conceptually," at the start of the documentation for each of these (IFunctor, IApplicative, IMonad, State, ITraversable), which fully explains the significance of each one. Why should I care about or use this? I now have the answers, succinctly.
It is also illuminating that the Copute syntax is darn slick, concise, and not confusing.
I am ecstatic with this accomplishment and milestone.
Note this file is still a little bit incomplete (has some incorrect code in some of the mixin implementations).
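Since the attached file is not reproduced here, the following is a minimal State monad sketch in TypeScript of my own making (the names `State`, `run`, `flatMap`, `get`, `put` are illustrative, not taken from the attached Copute file): a stateful computation is just a function from a state to a (value, new state) pair, and `flatMap` threads the state through.

```typescript
// A state transition: take a state S, return a result A and the next state.
type StateFn<S, A> = (s: S) => [A, S];

class State<S, A> {
  constructor(readonly run: StateFn<S, A>) {}
  map<B>(f: (a: A) => B): State<S, B> {
    return new State(s => { const [a, s2] = this.run(s); return [f(a), s2]; });
  }
  // Sequence two stateful computations, threading the state through.
  flatMap<B>(f: (a: A) => State<S, B>): State<S, B> {
    return new State(s => { const [a, s2] = this.run(s); return f(a).run(s2); });
  }
  static unit<S, A>(a: A): State<S, A> { return new State(s => [a, s]); }
  static get<S>(): State<S, S> { return new State(s => [s, s]); }
  static put<S>(s: S): State<S, void> { return new State(_ => [undefined, s]); }
}

// Example: thread a counter through a computation without any mutable variable.
// tick returns the current counter and increments the stored state.
const tick: State<number, number> =
  State.get<number>().flatMap(n => State.put(n + 1).map(() => n));
const [value, finalState] = tick.flatMap(() => tick).run(0);
// value === 1 (returned by the second tick), finalState === 2
```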
I have attached my latest draft of the Copute grammar, which is incomplete and driving towards improving on what is at the Copute.com website, as discussed here:
https://goldwetrust.forumotion.com/t112p165-computers#4426
- Code:
interface IFunctor< Sub<T> : IFunctor<Sub,T>, T+ >
{
map<A> : (T -> A) -> Sub<A>
}
// Functional programming (FP) paradigms are fundamentally about not repeating oneself.
// Functor enables the reuse of all functions of type, T -> A, to which we give the name "morphism functions".
//
// A functor has a 'map' method, which enables all morphism functions of type, T -> A,
// to operate on functors of type, Sub<T>, producing a result of type, Sub<A>.
// Conceptually, a functor maps all functions of type, T -> A, to functions of type, Sub<T> -> Sub<A>
//
// This enables the reuse of these morphism functions, without requiring specialized versions of type, Sub<T> -> Sub<A>.
// For example, if List<T> is a functor, then a function of type, Int -> String, can convert a List<Int> into a List<String>.
// There is no need to code a specialized morphism function of type, List<Int> -> List<String>.
//
// A functor has some computation, data structure, or other form of state wrapped around the instance(s) of any type T.
// Thus all functions between unwrapped types are automatically lifted to functions between the wrapped types.
//
// The parametrized subtype, Sub<T>, has a 'map' method which returns an instance of the subtype, Sub<A>,
// where the type parameter T has been converted to a type A.
//
// The 'map' method inputs a function, T -> A,
// i.e. that function inputs a type T and returns a type A.
//
// Note that since 'map' is a method then it inputs 'this', so when viewed as a function,
// it has the type, Sub<T> -> (T -> A) -> Sub<A>.
//
// For example, given the input parameter to 'map' is a function of type, Int -> String,
// i.e. that inputs an integer and returns a string,
// then a (subtype of IFunctor that is a) list of integers, List<Int>,
// is converted to a list of strings, List<String>.
//
// Mapping is between the same subtype, Sub, i.e. Sub<T> -> Sub<A> and not Sub<T> -> Sub2<A>.
// This is because the subtype may not always be invertible (bijective)
// to the wrapped parametric type, T. For example, since Maybe<T>.none contains no instance of T
// (i.e. it can't be inverted to an unwrapped T), it is conceptually sensible to map it to an empty List<A>.
// However, there is no consistent model that will enable such a mapping and
// hold true in all general cases of destination subtypes.
//
// In category theory, a functor is a model of COMPOSABLE mappings between categories.
// In our case, each category is a concrete realization of the type, Sub<T>,
// i.e. the type parameter T is concrete type.
// Thus, our functor has a mapping from category Sub<T> to Sub<A>, given the morphism function, T -> A.
// A functor follows the laws that ensure it is composable:
// http://en.wikipedia.org/w/index.php?title=Functor&oldid=433685926#Definition
// map( g ).map( f ) == map( \x -> f(g(x)) ) F(g) * F(f) == F( g * f )
// u.map( \x -> x ) == u F(id) == id
interface IApplicative< Sub<T> : IApplicative<Sub,T>, T+ > inherits IFunctor<Sub,T>
{
static wrap<A> : A -> Sub<A>
apply<T, A> : Sub<T -> A> -> Sub<A>
}
// Read first the documentation for IFunctor.
//
// Conceptually, an applicative empowers any morphism function, which inputs unwrapped type(s),
// to operate on (an) applicative(s) of those type(s).
// Thus, specialized versions of morphism functions do not need to be written for each applicative subtype.
//
// Using the 'apply' method, a function of type, T -> A, which inputs type T and returns type A,
// can operate on an applicative of type Sub<T>, and return an applicative of type Sub<A>.
//
// Note that if the function inputs multiple parameters, then A will be the curried function type,
// e.g. if the function is of type, T -> B -> C, then the A in T -> A is of type, B -> C.
// Thus the example function would return a value of type C, from two applicative inputs--
// first of type Sub<T>, second of type Sub<B>.
//
// For example, a function of type, Int -> Int -> Int, which adds two integers and returns the result,
// can input two Maybe<Int> and return a Maybe<Int>.
// Thus, if either of the inputs is a Maybe<Int>.none (i.e. "null"), then 'apply' does not call the function
// and a Maybe<Int>.none is returned.
//
// The parametrized subtype of an applicative, Sub<_>, has a static 'wrap' method
// that constructs an instance of itself, Sub<A>,
// from an input argument which is an instance of type, A.
//
// Applicative is a functor that adds a mapping, 'apply', which inputs the morphism function, T -> A,
// wrapped in an instance of its subtype, Sub<T -> A>.
//
// Note that since 'apply' is a method then it inputs 'this', so when viewed as a function,
// it has the type, Sub<T> -> Sub<T -> A> -> Sub<A>.
//
// Wrapping the input morphism function enables multiple 'apply' to be chained,
// so that a multiple parameter morphism function,
// e.g. T -> B -> C -> D where the A in T -> A is of type B -> C -> D,
// can be distributed to multiple instances of applicative, e.g. Sub<T>, Sub<B>, Sub<C>.
// Without such a model, the equivalent code is excessively verbose
// and must be specialized for each subtype, as shown in the examples below, and also at this link:
// http://en.wikibooks.org/wiki/Haskell/Applicative_Functors#Using_Applicative_Functors
//
// For example, given a morphism function of type, Int -> Int -> Int -> Int:
// f : Int -> Int -> Int -> Int = \a, b, c -> a + (b * c)
// Apply this function to 3 lists, List( 1, 2, 3 ), List( 10 ), List( 6, 7 ), as follows:
// List( 1, 2, 3 ).apply( List( 10 ).apply( List( 6, 7 ).apply( List.wrap( f ) )))
// List( 1, 2, 3 ).apply( List( 10 ).apply( List( 6, 7 ).map( f ) ))
// IDEs should optionally allow entry and rendering as if 'apply' and 'map' are operators <*> and <<<
// (TODO: formalize Copute grammar for operator vs. method equivalents):
// List( f ) <*> List( 6, 7 ) <*> List( 10 ) <*> List( 1, 2, 3 )
// f <<< List( 6, 7 ) <*> List( 10 ) <*> List( 1, 2, 3 )
// The result is List( 16, 26, 36, 17, 27, 37 ).
//
// Each application of 'apply' curries the function, f, with the elements of the list:
// List.wrap( f ) == List( f )
// List( 6, 7 ).apply( List( f ) ) == List( f(6), f(7) )
// List( 10 ).apply( List( f(6), f(7) ) ) == List( f(6)(10), f(7)(10) )
// List( 1, 2, 3 ).apply( List( f(6)(10), f(7)(10) ) ) == List( f(6)(10)(1), f(6)(10)(2), f(6)(10)(3), f(7)(10)(1), f(7)(10)(2), f(7)(10)(3) )
// List( f(6)(10)(1), f(6)(10)(2), f(6)(10)(3), f(7)(10)(1), f(7)(10)(2), f(7)(10)(3) ) == List( 16, 26, 36, 17, 27, 37 )
//
// f is a function of type Int -> Int -> Int -> Int List( f ) is type List<Int -> Int -> Int -> Int>
// f(6) is a curried function of type Int -> Int -> Int List( f(6), f(7) ) is type List<Int -> Int -> Int>
// f(6)(10) or f(6,10) is a curried function of type Int -> Int List( f(6)(10), f(7)(10) ) is type List<Int -> Int>
// f(6)(10)(1) or f(6,10,1) returns an Int with value 16 List( 16, 26, 36, 17, 27, 37 ) is type List<Int>
//
// The equivalent code without an applicative model is:
// ref a = List( 6, 7 ), b = List( 10 ), c = List( 1, 2, 3 )
// ref result = List(
// f(a.head, b.head, c.head),
// f(a.head, b.head, c.tail.head),
// f(a.head, b.head, c.tail.tail.head),
// f(a.tail.head, b.head, c.head),
// f(a.tail.head, b.head, c.tail.head),
// f(a.tail.head, b.head, c.tail.tail.head) )
//
// The applicative model expresses the semantics more clearly:
// ref result = List( f ) <*> List( 6, 7 ) <*> List( 10 ) <*> List( 1, 2, 3 )
//
// The Maybe type can wrap the none case:
// Maybe.wrap( f ) <*> Maybe.wrap( 3 ) <*> Maybe.wrap( 4 ) <*> Maybe.wrap( 5 ) == Maybe.wrap( 23 )
// Maybe.wrap( f ) <*> Maybe.wrap( 3 ) <*> Maybe<Int>.none <*> Maybe.wrap( 5 ) == Maybe<Int>.none
//
// The equivalent code without an applicative model is:
// ref a = Maybe.wrap( 3 ), b = Maybe.wrap( 4 ), c = Maybe.wrap( 5 ) // or b = Maybe<Int>.none
// ref result = \ {
// switch( a ) {
// case none: return a
// case is( ia ):
// switch( b ) {
// case none: return b
// case is( ib ):
// switch( c ) {
// case none: return c
// case is( ic ): return Maybe.wrap( f(ia,ib,ic) )
// }
// }
// }
// }()
//
// The above equivalent code is computationally more efficient than the applicative model,
// because the first case 'none' encountered can short-circuit the subsequent tests for 'none'.
// Whereas the applicative model tests for case 'none' on every 'apply' call, i.e. the innermost 'apply' is called first,
// and Maybe<Int>.none is returned (instead of, for example, Maybe.wrap( f(3)(4) )) to be input by the next 'apply':
//
// c.apply( b.apply( a.apply( Maybe.wrap(f) )))
//
// In contrast, since IMonad.bind inputs a morphism function on the unwrapped type that returns Sub<A> instead of A,
// calls to IMonad.bind are chained by nesting lambda functions, i.e. the outermost 'bind' is called first,
// and can thus short-circuit:
//
// c.bind( \ic -> b.bind( \ib -> a.bind( \ia -> Maybe.wrap(f(ia,ib,ic)) )))
//
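// A minimal Python sketch of this short-circuiting difference, using a hypothetical Maybe class
// ('bind' is the only primitive shown being chained; None-handling mirrors Maybe<T>.none):

```python
class Maybe:
    def __init__(self, value, has=True):
        self.value, self.has = value, has
    @staticmethod
    def wrap(v):
        return Maybe(v)
    def apply(self, mf):          # self wraps the argument, mf wraps a function
        if not self.has or not mf.has:
            return Maybe.none
        return Maybe(mf.value(self.value))
    def bind(self, k):            # k : T -> Maybe<A>; never called on none
        return self if not self.has else k(self.value)

Maybe.none = Maybe(None, has=False)

f = lambda a: lambda b: lambda c: a + b * c
a, b, c = Maybe.wrap(3), Maybe.none, Maybe.wrap(5)

# bind nesting: the outermost bind (on c) runs first; once b is none,
# the innermost lambda over a is never evaluated
result = c.bind(lambda ic:
         b.bind(lambda ib:
         a.bind(lambda ia: Maybe.wrap(f(ia)(ib)(ic)))))
assert not result.has

# with all three present, f(3)(4)(5) == 23 as in the Maybe example above
ok = Maybe.wrap(5).bind(lambda ic:
     Maybe.wrap(4).bind(lambda ib:
     Maybe.wrap(3).bind(lambda ia: Maybe.wrap(f(ia)(ib)(ic)))))
assert ok.value == 23
```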
// However, this ability to short-circuit has the cost that IMonad does not generally compose:
// The Essence of the Iterator Pattern, Gibbons & Oliveira, sec2.3 on pg 5, sec3 on pg 6, sec3.3 on pg 8
// Applicative programming with effects, McBride & Paterson, sec5 on pg 8
// http://blog.tmorris.net/monads-do-not-compose/
//
// I was originally incorrectly thinking that an empty list would conflict with the semantics of applicative:
// http://goldwetrust.forumotion.com/t112p150-computers#4294 (this link also contains an idea for avoiding exceptions with Nil)
// However, I now realize that an empty list is analogous to Maybe<T>.none,
// such that an empty list should be returned from chained 'apply'.
//
// Every subtype must fulfill the following laws from category theory.
// The Essence of the Iterator Pattern, Gibbons & Oliveira, sec3 on pg 6 (laws ensure any applicative expression can be converted to the canonical form)
// Applicative programming with effects, McBride & Paterson, sec2 on pg 3
// (i) wrap id 'ap' u = u u.apply( wrap( \t -> t ) ) == u
// (ii) wrap (*) 'ap' f 'ap' g 'ap' u = f 'ap' (g 'ap' u) u.apply( g.apply( f.apply( wrap( \f,g,u -> f(g(u)) )))) == u.apply( g ).apply( f )
// (iii) wrap f 'ap' wrap x = wrap (f x) wrap( x ).apply( wrap(f) ) == wrap( f(x) )
// (iv) u 'ap' wrap x = wrap (\f -> f x) 'ap' u       wrap( x ).apply( u ) == u.apply( wrap( \f -> f(x) ) )
mixin Applicative< Sub<T> : Applicative<Sub,T>, T+ > inherits IApplicative<Sub,T>
{
map = f -> apply( Sub.wrap(f) )
}
// Default implementation of IFunctor.map given an IApplicative.
//
// IFunctor.map has an input parameter f that is type, T -> A, thus Sub.wrap(f) returns type, Sub<T -> A>,
// and thus 'apply' returns type Sub<A>, which is the return type of IFunctor.map.
interface IMonad< Sub<T> : IMonad<Sub,T>, T+ > inherits IApplicative<Sub,T>
{
bind<A> : (T -> Sub<A>) -> Sub<A>
static join< M<A> : IMonad<M,A>, A > : IMonad<_,M> -> M<A> = nested -> nested.bind( \t -> t )
}
// Read first the documentation for IApplicative.
//
// Conceptually, monad enables morphism functions, T -> A, which operate on unwrapped types,
// to compose with functions, T -> Sub<A>, which return the wrapped subtype.
//
// Normal function composition means given function, f : A -> B, and function, g : T -> A,
// then we can write a composition function, h : (T -> A) -> (A -> B) -> T -> B = \g, f, x -> f( g(x) ),
// which inputs two functions and an instance of type T, then returns an instance of type B.
// Monad allows this composition even if one or both of the functions instead have the form, f : A -> Sub<B>, or g : T -> Sub<A>.
//
// To gain insight into why monadic composition is useful, consider that each instance of the Maybe<T> monad,
// contains either an instance of T or the value 'none' (i.e. conceptually similar to 'null').
// Thus functions, g, that return Maybe<A> can be composed with functions, f, that input an A.
// h : (T -> Maybe<A>) -> (A -> B) -> T -> Maybe<B> = \g, f, x -> g(x).map( f )
// Otherwise, we would create boilerplate to test the return value for the 'none' value, as follows.
// h : (T -> Maybe<A>) -> (A -> B) -> T -> Maybe<B> = \g, f, x { switch( g(x) ) { case none: return Maybe<B>.none; case is( a ): return Maybe.wrap( f(a) ) } }
// But it isn't just the small amount of boilerplate that is avoided for the Maybe monad,
// but different boilerplate for each subtype of monad, some of which might be more lengthy, e.g. the State monad.
// And with subtype specific boilerplate, the generalized IMonad interface couldn't be referred to in arguments and return values.
// http://blog.sigfpe.com/2006/06/monads-kleisli-arrows-comonads-and.html
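// The composition h can be sketched in Python (None stands in for Maybe<A>.none; 'half' and
// 'show' are hypothetical example functions):

```python
def h(g, f):
    # h : (T -> Maybe<A>) -> (A -> B) -> T -> Maybe<B>, i.e. \g, f, x -> g(x).map( f )
    def composed(x):
        t = g(x)
        return None if t is None else f(t)
    return composed

half = lambda n: n // 2 if n % 2 == 0 else None   # g : T -> Maybe<A>
show = lambda n: "half=" + str(n)                 # f : A -> B

assert h(half, show)(8) == "half=4"
assert h(half, show)(7) is None                   # 'none' propagates without boilerplate at the call site
```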
//
// Monad is an applicative model that adds a method 'bind',
// that inputs the morphism function, T -> Sub<A>, which returns Sub<A>.
//
// Functions which operate on unwrapped types, T -> A, can be input to 'map', or composed with the static 'wrap' method,
// and the composition function input to 'bind' (but 'map' might be more optimized).
//
// Note that since 'bind' is a method then it inputs 'this', so when viewed as a function,
// it has the type, Sub<T> -> (T -> Sub<A>) -> Sub<A>.
//
//
// Unlike applicative, a fully generalized composition of monads is not possible.
// The Essence of the Iterator Pattern, Gibbons & Oliveira, sec2.3 on pg 5, sec3 on pg 6, sec3.3 on pg 8
// Applicative programming with effects, McBride & Paterson, sec5 on pg 8
// http://blog.tmorris.net/monads-do-not-compose/
// This is because the monad is more powerful: the outermost 'bind' of a composition executes before the innermost.
// c.bind( \ic -> b.bind( \ib -> a.bind( \ia -> Maybe.wrap(f(ia,ib,ic)) )))
// This order is the opposite of the applicative model's.
// c.apply( b.apply( a.apply( Maybe.wrap(f) )))
//
// Monad's 'bind' makes possible a 'join' method that flattens a "wrapping of a wrapping" to a wrapping.
// Imagine a container of container(s) of a type, e.g. List<List<T>>, flattened to a container of that type, e.g. List<T>.
//
// Every subtype must fulfill the following laws from category theory:
// Comprehending Monads, Wadler, pg 3
// Monad laws, Monads for functional programming, Wadler, sec3
// The laws involving join, follow from the laws for a monoid, http://apocalisp.wordpress.com/2010/07/21/more-on-monoids-and-monads/
// (i) map id = id \t -> t.map( \u -> u ) == \t -> t
// (ii) map (f * g) = map f * map g          i.map( \t -> f( g(t) ) ) == i.map( \t -> g(t) ).map( \t -> f(t) )
// (iii) map f * wrap = wrap * f             \t -> wrap( t ).map( f ) == \t -> wrap( f(t) )
// (iv) map f * join = join * map (map f)    \t -> join( t ).map( f ) == \t -> join( t.map( \u -> u.map(f) ) )
// (I) join * wrap = id                      \t -> join( wrap(t) ) == \t -> t
// (II) join * map wrap = id                 \t -> join( t.map( \u -> wrap(u) ) ) == \t -> t
// (III) join * map join = join * join       \t -> join( t.map( \u -> join(u) ) ) == \t -> join( join(t) )
mixin Monad< Sub<T> : Monad<Sub,T>, T+ > inherits IMonad<Sub,T>
{
map = f -> bind( \t -> Sub.wrap( f(t) ) )
apply = mf -> mf.bind( \f -> map(f) )
}
// Default implementation of IFunctor.map and IApplicative.apply given an IMonad.
//
// IFunctor.map has an input parameter f that is type, T -> A,
// thus the function, \t -> Sub.wrap( f(t) ), has type, T -> Sub<A>,
// and thus 'bind' returns type Sub<A>, which is the return type of 'map'.
//
// IApplicative.apply has an input parameter mf that is type, Sub<T -> A>,
// thus the function, \f -> map(f), has type, (T -> A) -> Sub<A>,
// because 'map' has the type, (T -> A) -> Sub<A>.
// Thus 'mf.bind' returns type Sub<A>, which is the return type of 'apply'.
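// The two derivations above can be checked with a small Python list monad (hypothetical names;
// 'bind' is the only primitive, and 'map'/'apply' are derived exactly as in the mixin):

```python
class ListM:
    def __init__(self, xs):
        self.xs = list(xs)
    @staticmethod
    def wrap(v):
        return ListM([v])
    def bind(self, k):                 # k : T -> ListM<A>
        return ListM([a for t in self.xs for a in k(t).xs])
    def map(self, f):                  # map = f -> bind( \t -> Sub.wrap( f(t) ) )
        return self.bind(lambda t: ListM.wrap(f(t)))
    def apply(self, mf):               # apply = mf -> mf.bind( \f -> map(f) )
        return mf.bind(lambda f: self.map(f))

assert ListM([1, 2, 3]).map(lambda x: x * 10).xs == [10, 20, 30]
inc, dbl = (lambda x: x + 1), (lambda x: x * 2)
assert ListM([1, 2]).apply(ListM([inc, dbl])).xs == [2, 3, 2, 4]
```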
interface IMState<S,T+> inherits Monad<IMState<S,_>,T> // IMState<S,_> is partial type application, because IMonad expects Sub<T>, not Sub<T,S>. Scala's partial type application syntax is more verbose, see section 5 of "Generics of a Higher Kind", and: http://stackoverflow.com/questions/6247817/is-it-possible-to-curry-higher-kinded-types-in-scala/6248296#6248296
{
m (public, const) : impure (S -> <(S,T)>)
// Overload 'map' to support morphism functions that operate on state
impure map<A> : impure (<(S,T)> -> A) -> IMState<S,A>
}
interface IState<S,T+> inherits Monad<IState<S,_>,T> // IState<S,_> is partial type application, because IMonad expects Sub<T>, not Sub<T,S>. Scala's partial type application syntax is more verbose, see section 5 of "Generics of a Higher Kind", and: http://stackoverflow.com/questions/6247817/is-it-possible-to-curry-higher-kinded-types-in-scala/6248296#6248296
{
m (public, const) : (S -> <(S,T)>)
// Overload 'map' to support morphism functions that operate on state
map<A> : (<(S,T)> -> <(S,A)>) -> IState<S,A>
}
class MState<S,T+> inherits IMState<S,T>
: (S -> <(S,T)>) = \m // <(S,T)> is a shortcut for Tuple2<S,T>
{
m = m
wrap = \t -> MState<S,T>( \x -> <(x,t)> )
bind = \k -> MState<S,T>( \x
{
<(y,b)> = m(x) // shortcut for, r : Tuple2 = m(x); y = r._0; b = r._1
-> k(b).m(y)
} )
impure map<A> : impure (<(S,T)> -> A) -> IMState<S,A> = \k -> MState<S,T>( \x
{
<(y,b)> = m(x)
-> <( y, k(y,b) )> // 'k' may be impure if it modifies state referenced by 'y'
} )
}
class State<S,T+> inherits IState<S,T>
: (S -> <(S,T)>) = \m // <(S,T)> is a shortcut for Tuple2<S,T>
{
m = m
wrap = \t -> State<S,T>( \x -> <(x,t)> )
bind = \k -> State<S,T>( \x
{
<(y,b)> = m(x) // shortcut for, r : Tuple2 = m(x); y = r._0; b = r._1
-> k(b).m(y)
} )
map<A> : (<(S,T)> -> <(S,A)>) -> IState<S,A> = \k -> State<S,T>( \x -> k(m(x)) ) // 'k' is pure because it returns a copy of the (potentially modified) state
}
// Read first the documentation for IMonad.
//
// Conceptually, the state monad enables composition of morphism functions, T -> A,
// which do not operate on the wrapped state of type, S,
// with stateful morphism functions that operate on the tuple of the wrapped state, <(S,T)> -> <(S,A)>.
//
// <(S,T)> -> <(S,A)> is short-hand for Tuple2<S,T> -> Tuple2<S,A>,
// where TupleN is a convenient way of passing N instances around in one instance.
//
// Since the stateful morphisms input the wrapped state, the State.map method is used to compose them,
// instead of the 'bind' method. Note 'bind' is called by the default mixin implementation
// of Monad.apply.
//
// The state of the state monad is an unbounded sequence of nested functions of type, S -> <(S,T)>.
// This enables 'map' (via its call of 'bind') to thread the state across morphism functions, T -> A,
// which do not operate on the state, S.
// This threading is equivalent to the normal function composition of these morphism functions.
// The benefit of using 'map' is that stateful morphism functions can also be mixed into the composition.
//
// When no impure morphism functions operate on the state (i.e. every stateful morphism returns a copy of the modified state),
// the state monadic composition of morphism functions
// is not imperative, and any order-dependency is conceptually no different from the
// normal composition of pure functions:
// http://goldwetrust.forumotion.com/t112p180-computers#4434
//
// Method 'bind' inputs a morphism function that does not operate on state,
// and threads (bridges) the state across this function call,
// so the state is available to a nested 'bind', 'map', or 'apply'.
//
// Method 'map' is overloaded to support (both pure and impure) morphism functions that operate on state.
//
// See the example usage of the state monad in the documentation for ITraversable
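// A Python sketch of the state monad described above (hypothetical class; 'm' is the wrapped
// function S -> (S, T), 'bind' threads the state, and 'map_state' stands in for the overloaded
// 'map' on state tuples):

```python
class State:
    def __init__(self, m):
        self.m = m                      # m : S -> (S, T)
    @staticmethod
    def wrap(t):
        return State(lambda s: (s, t))  # wrap = \t -> State( \x -> <(x,t)> )
    def bind(self, k):                  # k : T -> State<S, A>
        def run(s):
            s2, t = self.m(s)           # <(y,b)> = m(x)
            return k(t).m(s2)           # k(b).m(y)
        return State(run)
    def map_state(self, k):             # overloaded 'map' on <(S,T)> tuples
        return State(lambda s: k(self.m(s)))

# thread a counter through a morphism that never sees the state
prog = State.wrap('a').bind(lambda t: State(lambda s: (s + 1, t.upper())))
assert prog.m(0) == (1, 'A')

# a stateful morphism composed via map_state
assert State.wrap('a').map_state(lambda st: (st[0] + 1, st[1])).m(0) == (1, 'a')
```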
interface IMonoid< Sub : IMonoid<Sub> >
{
// An instance which is commutative as either operand to append.
// http://en.wikipedia.org/wiki/Identity_element
static identity : Void -> Sub
// An instance with the input instance appended.
append : Sub -> Sub
}
// Monoid has an identity instance, which is commutative as either operand of accumulation, and an associative accumulation function.
//
// Every subtype must fulfill the following laws from category theory:
// append * identity = id \t -> t.append( identity ) == \t -> t
// identity * append = id \t -> identity.append( t ) == \t -> t
// (append a) append b = append (a append b) \t,a,b -> t.append( a ).append( b ) == \t,a,b -> t.append( a.append( b ) )
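// The three laws can be spot-checked with a tiny Python monoid (a hypothetical Sum monoid over
// Int; 'identity' and 'append' follow the interface above):

```python
class Sum:
    def __init__(self, n):
        self.n = n
    @staticmethod
    def identity():
        return Sum(0)                    # commutative as either operand to append
    def append(self, other):
        return Sum(self.n + other.n)

t, a, b = Sum(1), Sum(2), Sum(3)
assert t.append(Sum.identity()).n == t.n                   # append * identity = id
assert Sum.identity().append(t).n == t.n                   # identity * append = id
assert t.append(a).append(b).n == t.append(a.append(b)).n  # associativity
```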
interface IDualMonoid< Sub : IDualMonoid<Sub> > inherits IMonoid<Sub>
{
prepend : Sub -> Sub
}
// Dual of monoid appends in the opposite direction.
interface IMonoidHigherKind< Sub<T> : IMonoidHigherKind<Sub,T>, T+ > inherits IMonoid<Sub>
{
static identity : Void -> Sub<T>
append<A <: T> : Sub<A> -> Sub<A>
append<A <: T> : A -> Sub<A>
}
// Monoid parametrized on covariant T.
interface IDualMonoidHigherKind< Sub<T> : IDualMonoidHigherKind<Sub,T>, T+ > inherits IMonoidHigherKind<Sub,T>
{
prepend<A <: T> : Sub<A> -> Sub<A>
prepend<A <: T> : A -> Sub<A>
}
// Dual of monoid, parametrized on covariant T, appends in the opposite direction.
interface ITraversable< Sub<T> : ITraversable<Sub,T>, T+ >
{
traverse< F<_> : IApplicative<F,_>, A > : (T -> F<A>) -> F<Sub<A>>
traverse< F<_> : IApplicative<F,_>, A, C > : (Void -> C) -> (T -> C) -> (C -> T -> C) -> (T -> F<A>) -> F<Sub<A>>
accumulate< M : IMonoid<M> > : (T -> M) -> M
accumulate< M<A> : IMonoidHigherKind<M,A>, A > : (T -> A) -> M<A> -> M<A>
static dist< R<_> : ITraversable<R,_>, F<_> : IApplicative<F,_>, A > : R<F<A>> -> F<R<A>> = nested -> nested.traverse( \t -> t )
static concat< R<M> : ITraversable<R,M>, M : IMonoid<M> > : R<M> -> M = nested -> nested.accumulate( \t -> t )
static concat< R<M<A>> : ITraversable<R,M<A>>, M<A> : IMonoidHigherKind<M,A>, A > : R<M<A>> -> M<A> = nested -> nested.accumulate( \t -> t )
}
// Conceptually, traverse iterates the contained instances of the parameter T,
// enabling both mapping and accumulating with a single pass of the iteration.
//
// Each contained instance of T is input to a mapping function, T -> F<A>, which returns an applicative, F.
// Note if A == T, then this mapping function is just IApplicative.wrap.
// These applicatives are accumulated by a two-parameter function, employing the applicative model of function application.
// Since the traversable might contain only zero or one instance of T,
// accumulation functions 'identity' and 'lift', which respectively input zero and one parameters, are also required.
//
// In Haskell, see [1] (but note 'accum' replaces 'f', added 'identity' and 'lift' parameters to generalize it):
// traverseList : Applicative m => (() -> c) -> (b -> c) -> (c -> b -> c) -> (a -> m b) -> [a] -> m c
// traverseList identity lift accum map [] = pure identity
// traverseList identity lift accum map (head:[]) = pure (lift head)
// traverseList identity lift accum map (head:tail) = (pure accum) <*> (traverse identity lift accum map tail) <*> (map head)
//
// In Copute, where traversable (which is a list in the Haskell example) is generalized to IHeadable:
// traverse = \identity, lift, accum, map
// {
// if( head == Maybe<T>.none )
// -> F.wrap( identity )
// else if( tail == Maybe<T>.none )
// -> F.wrap( lift(head) )
// else
// -> traverse( identity, lift, accum, map, tail ).apply( map( head ).apply( F.wrap(accum) )) // F.wrap(accum) <*> traverse( identity, lift, accum, map, tail ) <*> map( head )
// }
//
// Thus 'map' lifts the first contained instance 'head' to an applicative,
// which is input to the second parameter of 'accum' using the applicative model of function application.
// This is repeated recursively on 'tail', thus 'accum' inputs its own output type as its first parameter.
// Thus the return type is the applicative parametrized on the output type of 'accum'.
//
// We have generalized the traverse previously proposed in literature (see [1]),
// so that it is possible to map with a no-op accumulation function, or vice versa.
// Also our generalization allows the accumulation function to map the output container type.
// The Haskell example in the literature is obtained from our generalization as follows.
// traverse f list = traverseList (\() -> []) (\x -> [x]) (flip (:)) f list
//
// Thus traverse distributes a function over each T, and returns the Applicative of Traversable,
// instead of Traversable of Applicative, as IFunctor.map would do.
//
// Thus traverse combines the IFunctor.map and ITraversable.dist into a single walk of T.
//
// State monad can be composed with traverse to fold the container into any state type.
// traverse<IState,String>( \t -> State<Int,String>( \x -> <(x+1,t)> )).m(0) == <(count, ITraversable<_,String>)> // count a collection of String and create a new copy of the collection String
// traverse<IState,Int>( \t -> State<Int,Int>( \x -> <(x+1,t.toInt)> )).m(0) == <(count, ITraversable<_,Int>)> // count a collection of String and map to collection of Int
// traverse<IState,Int>( \x -> 0, \x -> 0, \x -> 0, \t -> State<Int,Int>( \x -> <(x+1,0)> )).m(0) == <(count, 0)> // count a collection of String, without creating a copy of the collection
//
// References:
// [1] Traversing data structures, Applicative programming with effects, McBride & Paterson, sec3 on pg 6
// Idiomatic traversal, The Essence of the Iterator Pattern, Gibbons & Oliveira, sec3.4 on pg 8
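// As a concrete illustration, traverse specialized to a Maybe-like applicative can be sketched
// in Python (None stands in for Maybe<T>.none; function names are hypothetical):

```python
def traverse_maybe(xs, f):
    # Maps each element with f : T -> Maybe<A> (None == none); if any element
    # maps to None, the whole traversal is None, else the mapped container.
    out = []
    for x in xs:
        fx = f(x)
        if fx is None:
            return None
        out.append(fx)
    return out

parse = lambda s: int(s) if s.isdigit() else None
assert traverse_maybe(["1", "2", "3"], parse) == [1, 2, 3]
assert traverse_maybe(["1", "x", "3"], parse) is None
```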
mixin Traverse< Sub<T> : Traverse<Sub,T>, T+ > inherits ITraversable<Sub,T>, IMonoidHigherKind<Sub,T>, IHeadable
{
traverse = f ->
{
if( isA<IHead> )
-> f( head ).apply( tail.traverse(f).apply( F.wrap( append ) ) )
else
-> F.wrap( identity )
}
}
// Default implementation of ITraversable.traverse, given IMonoidHigherKind and IHeadable.
//
// See "traverse f (x : xs) = [ (:) (f x) (traverse f xs) ]" in Section 3 Traversing data structures, Applicative programming with effects, McBride and Paterson
// x is head,
// xs is tail,
// (f x) has type F<A>,
// (traverse f xs) has type F<Sub<A>>,
// (:) is IMonoidHigherKind.append which has type Sub<A> -> A -> Sub<A>,
// and inside the brackets [ and ] implies F.wrap( (:) ) and F.apply between each operand as explained on page 4.
mixin Accumulate< Sub<T> : Accumulate<Sub,T>, T+ > inherits ITraversable<Sub,T>, IHeadable
{
accumulate = f ->
{
if( isA<IHead> )
-> f( head ).append( tail.accumulate(f) )
else
-> M.identity
}
accumulate = \f, start ->
{
if( isA<IHead> )
-> tail.accumulate( f, start.append( f(head) ) )
else
-> start
}
}
// Default implementation of ITraversable.accumulate, given IHeadable.
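// The two accumulate overloads above are, in effect, a right fold and a left fold; a Python
// sketch (hypothetical helper names; 'append' is assumed associative with 'identity' as its unit):

```python
def accumulate(xs, f, identity, append):
    # right-fold form: f(head).append( tail.accumulate(f) ), unrolled iteratively
    out = identity
    for x in reversed(xs):
        out = append(f(x), out)
    return out

def accumulate_from(xs, f, start, append):
    # left-fold form: tail.accumulate( f, start.append( f(head) ) )
    for x in xs:
        start = append(start, f(x))
    return start

concat = lambda m, x: m + x
assert accumulate(["a", "b", "c"], str.upper, "", concat) == "ABC"
assert accumulate_from(["a", "b", "c"], str.upper, ">", concat) == ">ABC"
```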
- Code:
sources:
{ source }+
source:
class
reference
expression
ifElse // TODO: move this to 'primary' and change 'then' to use 'exprSansIfelse' instead of 'expression', and add 'else' to that line
scope:
'{' [ NL ] sources [ NL ] '}'
reference:
[ refType ] ID typeInit // without refType, defaults to 'cref' for pass-by-reference, and 'var' for pass-by-value, instances
refType ID init // TODO: remove this once we are ready to do type inference for assignment expression, see comment for //int below
refType:
'const' // the instance can not be modified, regardless if it is pass-by-reference or pass-by-value
'ref' // the reference can be modified to point to a different instance, applies only to pass-by-reference instances
'const' 'ref' // combines the above meanings, note the reference is not constant.
'var' // 'const' 'var' is not allowed because has same meaning as 'const'
'cref' // 'const' 'cref' is not allowed because has same meaning as 'const'
typeInit:
: nfType [ [ NL ] typeInit2 ] // init is optional if cType has a default constructor
tArgList [ NL ] ':' fType [ NL ] '=' [ NL ] function
//init when refType is not specified, this is already in the grammar as the assignment expression. Disambiguate from assignment when ID is not already declared. Note this means a typo on ID will silently fail assignment and create a reference that is never accessed (so maybe we can detect on compilation and error). To hide an ID of same name in outer scope, then refType must be specified.
init:
'=' [ NL ] rightExpr
typeInit2:
init
nextType [ NL ] '=' [ NL ] function
nfType:
cType // include standard classes for the builtin types, Int, Float, String, Bool
fType:
nfType [ NL ] nextType
fType2
fType2:
'(' fType ')' [ NL ] nextType
'void' [ NL ] '->' [ NL ] xtype // 'void' not an instance reference type, thus can not be a class, only allowed as a single argument or return type
xtype:
nfType
'(' fType ')'
type:
nfType [ [ NL ] nextType ]
fType2
nextType:
'->' [ NL ] xtype [ [ NL ] nextType ]
'->' [ NL ] 'void' // 'void' not an instance reference type, thus can not be a class, only allowed as a single argument or return type
function:
fDecl [ NL ] fBody
fDecl:
[ purity ] '\' [ fArgs ]
fArgs:
fArg [ [ NL ] ',' [ [ NL ] fArgs ] ]
fArg:
[ '&' ] ID [ init ] // the optional '&' is for pass-by-expression, i.e. lazy evaluation, i.e. argument expression is implicitly wrapped in a function call that returns the evaluated expression. Note not allowed for return type.
purity:
'impure'
'mutable'
fBody:
'->' [ NL ] rightExpr
scope
expression:
assign
rightExpr
assign:
leftExpr [ NL ] assignop [ NLI ] rightExpr
leftExpr:
ID { [ NL ] memberRef } // only a named reference, or its class properties, may be assigned to. This simplifies our grammar significantly. Since we are targeting pure programming, we discourage assignment to a reference which is an unsaved return value, because in pure programming we can only return a new instance which has not yet been saved anywhere, thus such assignment would only be useful for impure, side-effect programming. Such impure programming can still be done (save a return value to a reference before assigning to it). Note, a method which inputs a 'void' can be called without '()', so semantic analysis should not allow assignment to the return value of such, nor to its class properties.
//conflicts with call below: '(' assign ')' { [ NL ] memberRef }+ // an assign has saved a reference in an ID, so we can assign to its class properties.
memberRef:
'.' [ NLI ] ID // no [ NL ] after the operator, http://code.google.com/p/copute/issues/detail?id=31
rightExpr:
boolOr [ [ NL ] '?' [ NLI ] assign [ NL ] ':' [ NLI ] assign ] // aka 'ternary' operator
boolOr:
boolAnd { [ NL ] '||' [ NLI ] boolAnd }
boolAnd:
bitOr { [ NL ] '&&' [ NLI ] bitOr }
bitOr:
bitXor { [ NL ] '|' [ NLI ] bitXor }
bitXor:
bitAnd { [ NL ] '^' [ NLI ] bitAnd }
bitAnd:
equality { [ NL ] '&' [ NLI ] equality }
equality:
compare { [ NL ] equalityop [ NLI ] compare }
compare:
shift { [ NL ] compareop [ NLI ] shift }
shift:
concat { [ NL ] shiftop [ NLI ] concat }
concat:
add { [ NL ] '#' [ NLI ] add }
add:
multiply { addop [ NLI ] multiply } // see NLPLUS and NLMINUS, we can't put a NL in front of addop, because NL also separates expr, and unary contains an addop on its left-side in prefixop
multiply:
unary { [ NL ] multiplyop [ NLI ] unary }
unary:
[ prefixop ] [ postfixop ] instance [ postfixop ]
instance:
primary { [ NL ] memberRef } [ callExpr ] // Semantic analysis must determine if 'memberRef' and/or 'callExpr' is allowed on 'primary'. Also, a function call might not return an instance (i.e. 'void') for an impure function that has side-effects.
'new' [ NL ] cDecl call
primary:
ID
'(' [ NL ] expression [ NL ] ')'
'(' [ NL ] function [ NL ] ')'
switch
callExpr:
call { callSuffix }
callSuffix:
call
[ NL ] memberRef
call:
'(' [ NL ] fParams [ NL ] ')'
fParams:
expression [ [ NL ] ',' [ [ NL ] fParams ] ]
ifElse:
'if' '(' rightExpr ')' then
then:
[ NLI ] expression // use NLI instead of NL, http://code.google.com/p/copute/issues/detail?id=28
[ NL ] scope [ [ NL ] 'else' block ] // if there is an 'else', force braces on the 'if', http://code.google.com/p/copute/issues/detail?id=21#c2
block:
[ NLI ] expression // use NLI instead of NL, http://code.google.com/p/copute/issues/detail?id=28
scope
switch:
'switch' '(' [ NL ] expression [ NL ] ')' [ NL ] '{' [ NL ] { case } [ default ] '}' // We force the '(' and ')' because, http://code.google.com/p/copute/issues/detail?id=30&can=1
case:
'case' rightExpr ':' [ NL ] sourcesOrEmpty [ NL ] // allow empty case ';', so that user can do noop for specific cases without a default, to make sure all cases are coded
sourcesOrEmpty:
sources
';'
default:
'default' ':' [ NL ] sourcesOrEmpty [ NL ]
class:
genre ID [ [ NL ] tArgList ] [ NL ] cDecl // if INTERFACE, then ID must begin with "I[A-Z]", else "[A-HJ-Z][^A-Z]", http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
anonyClass:
genre [ [ NL ] funcArgs ] [ NL ] cDecl
genre:
'case'
'class'
'mixin'
'interface'
tArgList:
'<' tArgs '>'
tArgs:
[ NL ] tArg [ [ NL ] ',' [ tArgs ] ]
tArg:
[ variance ] ID [ tArgList ] [ tConstrain ] // the 'tArgList' is for a higher-kinded type parameter
variance:
'+'
'-'
tConstrain:
subTypeOf [ NL ] type [ superTypeOf [ NL ] type ]
superTypeOf [ NL ] type
subTypeOf:
':'
superTypeOf:
'<:'
cDecl:
[ inherit [ NL ] ] [ constructor [ NL ] ] cBody // semantic analysis must not allow 'constructor' for 'interface' and 'mixin'
inherit:
'inherits' cTypes
constructor:
[ 'private' [ NL ] ] ':' nfType [ NL ] [ nextType [ NL ] ] '=' [ NL ] fDecl
cTypes:
[ NL ] cType [ [ NL ] ',' [ cTypes ] ]
cType:
ID [ tParamList ] // ID is class
anonyClass
tParamList:
'<' types [ NL ] '>'
cBody:
'{' [ [ NL ] cMember { [ ',' ] [ NL ] cMember } ] [ NL ] '}'
cMember:
[ 'private' ] cMemberSuffix
cMemberSuffix:
cmBody
class // Note, this can be a CASE
cmBody:
[ 'static' ] ID [ etters ] [ [ NL ] typeInit ] // this can be a 'CASE ID', when 'genre' is 'interface', because no ID without 'etters' in 'interface'
'case' ID
etters:
'(' access ',' access ')' // The modifier that precedes ID must not be 'private'
access:
'public'
'new'
'private'
// Operator sets
prefixop:
addop // http://stackoverflow.com/questions/2624410/what-is-the-purpose-of-javas-unary-plus-operator
NLPLUS // http://code.google.com/p/copute/issues/detail?id=23, where this terminal will conflict with an NL or addop expected, otherwise process as PLUS...
NLMINUS // ...or MINUS
'!'
'~'
TYPEOF // can be used with type parametrization to fork functionality, although this is equivalent to overloading and/or using an case class, so I may deprecate this
postfixop:
'++'
'--'
multiplyop:
'*'
'/'
'%'
addop:
'+'
'-'
shiftop:
'<<'
'>>'
'>>>'
compareop:
// '<' // use MORE instead and flip operands, conflicts with sequence type ('tParamList')
'>'
'<='
'>='
equalityop:
'=='
'!='
'===' // compare the instance data (or overloaded), same as EQUAL for pass-by-value types
'!==' // compare the instance data (or overloaded), same as NOTEQUAL for pass-by-value types
assignop:
'='
'&='
'|='
'^='
'*='
'/='
'%='
'+='
'-='
'<<='
'>>='
'>>>='
'#='
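A minimal recursive-descent sketch of how the layered expression rules above (rightExpr through multiply) encode operator precedence: each rule parses the next-tighter-binding rule as its operand. Only the 'add' and 'multiply' layers are shown, with 'unary' collapsed to an integer literal; the function names are hypothetical:

```python
import re

def tokenize(src):
    # integers and the addop/multiplyop terminals from the grammar
    return re.findall(r"\d+|[+\-*/%]", src)

def parse_add(toks):        # add: multiply { addop multiply }
    v = parse_multiply(toks)
    while toks and toks[0] in "+-":
        op = toks.pop(0)
        r = parse_multiply(toks)
        v = v + r if op == "+" else v - r
    return v

def parse_multiply(toks):   # multiply: unary { multiplyop unary }
    v = int(toks.pop(0))    # 'unary' collapsed to an integer literal here
    while toks and toks[0] in "*/%":
        op = toks.pop(0)
        r = int(toks.pop(0))
        v = v * r if op == "*" else v // r if op == "/" else v % r
    return v

assert parse_add(tokenize("2+3*4")) == 14   # multiply binds tighter than add
assert parse_add(tokenize("10-4-3")) == 3   # left-associative, as the { ... } repetition implies
```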
Last edited by Shelby on Sat Jun 25, 2011 2:56 pm; edited 1 time in total
(Haskell's) Laziness eliminates reasoning about time and space
Which is a positive:
http://copute.com/dev/docs/Copute/ref/Functional_Programming_Essence.html#Advantages
But it is also a negative, because you can't then control well time, space, and concurrency:
http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/
http://copute.com/dev/docs/Copute/ref/Functional_Programming_Essence.html#Allocation_Size_Determinism
https://goldwetrust.forumotion.com/t112p195-computers#4487
On the positive side, lazy purity may be theoretically as fast as imperative, whereas non-lazy purity may add a log factor:
https://goldwetrust.forumotion.com/t112p180-computers#4437
http://copute.com/dev/docs/Copute/ref/Functional_Programming_Essence.html#Advantages
...enables composition of functions over a large, sparse (even infinite), or diverse data space without conflating the implementation of the granularity on the space[4], e.g. the imperative example could not do logical recursion on fibs() for list-granularity.
[4]John Hughes (1984). "Why Functional Programming Matters" (¶8 §4, "Glueing Programs Together")
Last edited by Shelby on Thu Aug 25, 2011 12:49 pm; edited 4 times in total
Copute's unique feature to maximize modularity and avoid LSP errors
Prior discussion of Gilad Bracha's "Constructors are Harmful":
https://goldwetrust.forumotion.com/t112p135-computers#4191
The following makes some good points, and I applaud the clear explanations of the benefits of the pure lambda calculus.
However, OOP (subtyping) is not arbitrary, because it enables more deterministic partitioning (divide-and-conquer) of the design space, which can aid in the ability to reason about large design spaces (imagine the entire body of software being granularly reusable), as compared to (Haskell's or Scalaz's emulation using implicits) structural subtyping (ad-hoc polymorphism).
I am working on a granular approach to mixing the pure lambda calculus and subtyping.
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/
My followup on my above assertion:
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4534
Another of my followups:
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4535
https://goldwetrust.forumotion.com/t112p135-computers#4191
I have refined the solution since I wrote the above. Above I was proposing that static factories in an interface could have their return type parametrized to the implemented subtype class. That does not solve the problem. Static factories in an interface are necessary for other SPOT (single-point-of-truth) and boilerplate elimination reasons, e.g. see my implementation of 'wrap' (a/k/a 'unit') in an IMonad (IApplicative), and notice how much more elegant it is than the equivalent Scalaz. Note SPOT is also a critical requirement for maximizing modularity, i.e. ease of composition and reuse.
Rather, to accomplish abstraction of constructors, we need to nudge programmers to input factory functions, so that any code can be abstracted over another subtype of an 'interface' (i.e. instead of calling a 'class' constructor directly, input a factory function which returns the result of calling a constructor, so the caller can change the subtype being constructed). The important point is that we want to force programmers to create an 'interface'(s) for all their 'class' methods, which is accomplished by not allowing an implementation (i.e. a 'class' or 'mixin') to be referenced anywhere except in the 'inherits' declaration and the constructor call. This means the type of an instance reference can not contain an identifier which is a 'class' or 'mixin', thus forcing the type of all instance references to contain only identifiers which are an 'interface', i.e. instance references reveal the abstract interface, but do not indicate the implementation.
So Copute will have a crucial difference from Scala and the other contenders (e.g. Ceylon), in that 'interface' and 'mixin' will be separated (not conflated in a 'trait'), and only 'interface' can appear in the type of instance references. Note that in Copute (just like for a 'trait' in Scala) 'mixin' and 'interface' may not have a constructor. Scala's linearised form of multiple inheritance is retained.
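The factory-function idea can be sketched in Python (hypothetical names; an abstract base class stands in for a Copute 'interface', and the calling code receives a constructor-valued parameter instead of naming a concrete class):

```python
from abc import ABC, abstractmethod
from typing import Callable

class IStack(ABC):                 # the 'interface': the only name callers may use
    @abstractmethod
    def push(self, v): ...

class ListStack(IStack):           # the 'class': referenced only at the construction site
    def __init__(self):
        self.items = []
    def push(self, v):
        self.items.append(v)
        return self

def build(factory: Callable[[], IStack]) -> IStack:
    # abstracted over the implementation: a different subtype can be
    # substituted by passing a different factory
    return factory().push(1).push(2)

s = build(ListStack)
assert s.items == [1, 2]
```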
Note this unique feature of Copute (along with the elimination of virtual methods, a/k/a 'override') is also necessary to enforce the Liskov Substitution Principle (which relates to the concepts of covariance and contravariance):
(note some of the following specification is out-of-date with current specification in grammar and in my various notes around)
http://copute.com/dev/docs/Copute/ref/class.html#Virtual_Method
http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
http://copute.com/dev/docs/Copute/ref/class.html#Static_Duck_Typing
http://copute.com/dev/docs/Copute/ref/function.html#Overloading
The following makes some good points, and I applaud the clear explanations of the benefits of the pure lambda calculus.
However, OOP (subtyping) is not arbitrary, because it enables more deterministic partitioning (divide-and-conquer) of the design space, which can aid the ability to reason about large design spaces (imagine the entire body of software being granularly reusable), as compared to structural subtyping (ad-hoc polymorphism), e.g. Haskell's type classes or Scalaz's emulation of them using implicits.
I am working on a granular approach to mixing the pure lambda calculus and subtyping.
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/
On the utility of the FP approach
Referential transparency buys you modularity (the extent to which components of a program can be separated and recombined) and compositionality (the extent to which the whole program can be understood by understanding the components and the rules used to combine them).
Type systems also get more sophisticated to the extent that programs are pure. It’s easier to infer types for pure programs than it is for impure ones, for example. You also gain things like type constructor polymorphism (a.k.a. higher-kinded types) and higher-rank types. It’s my experience that in a sufficiently sophisticated program (such as a compiler for a programming language), you either use these abstractions or you repeat yourself.
Sophisticated type systems then in turn aid in program inference, which is the extent to which the computer (or the programmer with the aid of the computer) can infer the correct program from type information (think tab completion, only moreso). Program inference is currently an active area of research.
As a side-note a lot of OO folks are discovering the functional approach as a tool to aid in modular design. They call it “dependency injection”, “inversion of control”, “interpreter pattern” and the like.
A word on OO and the arbitrary
In OO, as commonly practiced, the choice of distinguished argument is arbitrary. Consider this function:
- Code:
K(x, y) = x
If we were to write this in OO style, then on which object, x or y, should the function K be dispatched? Should it be x.K(y), or y.K(x)? It’s arbitrary.
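The quoted point can be made concrete with a small sketch (Python for illustration; `Obj` is a hypothetical name): the free function has no distinguished argument, while the OO encoding must arbitrarily pick one.

```python
# The free function K: no distinguished argument.
def K(x, y):
    return x

# The OO encoding must dispatch on one object; picking x is arbitrary,
# since y.K(x) would encode the same function equally well.
class Obj:
    def __init__(self, value):
        self.value = value
    def K(self, other):
        return self.value
```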
On “types of languages”
I want to get clear on some concepts. First the question of “types of programming languages”. I don’t think it’s helpful to divide programming languages into “functional”, “object-oriented”, “imperative”, etc. Sure, certain languages are better suited to certain styles of programming, but it’s important to differentiate on essentials and not mere aesthetics.
Functional programming is a restriction that the programmer puts on his programs. Having a language with a compiler that will warn you if you’re breaking referential transparency is helpful, but not essential. I do functional programming in Java, for example, which most people consider an “imperative OO” language.
My followup on my above assertion:
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4534
Another of my followups:
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4535
I think it is important to note that OOP and imperative programming are orthogonal concepts (the quote of Jason in the article appears to have conflated them):
http://en.wikipedia.org/wiki/Referential_transparency_(computer_science)#Contrast_to_imperative_programming
Imperative programming is the antithesis of referential transparency, in that the former must be evaluated in a specific order. Whereas the OOP concepts of encapsulation (i.e. information hiding), inheritance, interfaces, and polymorphism can be referentially transparent.
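As a minimal sketch of that claim (Python; the names are illustrative), here is an interface with encapsulated state and polymorphic dispatch used in a referentially transparent way: nothing is mutated, so any call can be replaced by its value.

```python
from abc import ABC, abstractmethod

class Shape(ABC):                          # an interface
    @abstractmethod
    def area(self) -> float: ...

class Square(Shape):                       # encapsulation: _side is hidden, never mutated
    def __init__(self, side: float):
        self._side = side
    def area(self) -> float:
        return self._side * self._side

class Circle(Shape):
    def __init__(self, radius: float):
        self._radius = radius
    def area(self) -> float:
        return 3.141592653589793 * self._radius ** 2

def total_area(shapes) -> float:
    # Polymorphic yet referentially transparent: the same shapes in
    # give the same total out, in any order of evaluation.
    return sum(s.area() for s in shapes)
```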
There is an equivalence (not a dichotomy) between FP and OOP in terms of the "Expression Problem" (Wadler), in that it can be solved by either adding new functions or case classes respectively.
So imperative programming is the "evil" which FP should be criticizing, not OOP.
Replace "equivalence" with "duality" (the mathematics definition, not the "dichotomy" definition where dichotomy means logical opposite complement or mutually exclusive contradiction). FP and OOP are not dichotomic paradigms (rather they are orthogonal, i.e. mutually independent, and can be mixed), and they are duals in terms of their equivalent ability to solve the "Expression Problem".
http://en.wikipedia.org/wiki/Duality_(mathematics)
Note, above I am referring to the exclusive notion of FP, i.e. referential transparency. There is also an inclusive notion of FP which muddles definitions and understanding (and leads to conflation of terms):
http://www.artima.com/weblogs/viewpost.jsp?thread=166742
Also, replace "or case classes" with the more general "or subtypes".
http://beust.com/weblog2/archives/000490.html (read from Brian Slesinsky's comment down)
http://lambda-the-ultimate.org/node/2927#comment-43268
The point is the one I already made. Imperative is always the antithesis of side-effects-free. Pure (referentially transparent) is never imperative and vice-versa. They are dichotomies, i.e. mutually exclusive. If I am wrong, I would appreciate an example (see discussion of your example below).
1) Unless I have misunderstood your description of your point-of-view, it seems you are referring to the inclusive notion, which may be more clear from reading #2 below.
2) Order-of-execution dependence (i.e. imperative) is not side-effects-free, and thus is not pure FP. If your sub-expressions are pure FP, but your composition of them is imperative (sequenced in an execution order), then the composition is not pure (even though the sub-expressions are), i.e. the dichotomic paradigms do not mix without choosing one of the contradictions. If the composition of the pure FP expressions is not execution-order dependent, then it is pure and not imperative. Note, I assert that it is possible for a pure function to contain imperative composition which calls only pure functions (the rules I am working on have a few more requirements), thus the contradiction shifts back to pure (this is what I mean by my work on a granular mixing); yet while the function is pure, the internals of the function remain order-dependent and impure.
Referential transparency means the function call can be replaced with its cached return value, any where in the program it was called. It thus follows that order-of-execution does not matter, except in a trivial case. In pure FP, the hierarchy of function calls dictates an order-of-execution only where there is no reuse of a function, because the compiler is free to compute and cache values of a sub-function call before executing its caller, and to execute a pure function with different sets of arguments concurrently.
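A sketch of that property (Python; names hypothetical): a pure call can be replaced by its cached value, and reordering the reused sub-calls cannot change the result.

```python
def square(n):
    return n * n              # pure: the result depends only on the argument

cache = {}
def cached_square(n):
    # Referential transparency lets a call be replaced by its cached value
    # anywhere in the program it was called.
    if n not in cache:
        cache[n] = square(n)
    return cache[n]

# Order-of-execution independence: evaluating the sub-calls in either
# order (or concurrently) cannot change the final result.
r1 = square(3) + square(4)
r2 = square(4) + square(3)
```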
Rúnar said:
"Pure and imperative are orthogonal concepts"
Disagree, they are dichotomies, i.e. mutually exclusive (dependent because they are contradictory, can not be mixed without one overriding the other). Orthogonal means mutually inclusive (independent, freely intermixed without either paradigm contradicted).
Rúnar said:
"See the referentially transparent imperative program in the post."
I assume you are referring to where you wrote in the article, "We can also do “OO” in Haskell...".
a) Your example is using a monad (with 'do' syntactical sugar to obscure the nested 'bind' calls) to abstract side effects. Although abstraction is an "OO" concept, I assume your point is that purely functional can be written in an imperative style. Let's not conflate "OO" with imperative here, as OOP is orthogonal (not mutually exclusive) to pure FP, i.e. they can be intermixed without a contradiction. Whereas, imperative programming can not be mixed with FP (i.e. not mutually inclusive) without causing some level of the composition to be impure.
b) That example is imperative and not purely functional, because it has side-effects (e.g. printing to stdout). Yes, you can force Haskell to do impure imperative code.
Whereas, for example the State monad can be used to simulate global state in a purely functional (non-imperative) paradigm, i.e. only the pure functions in the composition that access the state are imperatively composed (order-dependent), and pure functions (that do not access the state) remain order-independent, when composed with the former using 'bind'. Here is an excerpt from my work on State monad which is not yet uploaded to my site.
// When exclusively not composing impure morphism functions that operate on state (i.e. that do not return a copy of modified state),
// the state monad composition of morphism functions (using 'bind' and 'map'),
// isolates the order-dependency drawback of global state to those pure morphism functions that operate on state,
// and any composed pure morphism functions, that do not input state, remain order-independent.
// Thus the state monad is a granular composition concurrent programming paradigm.
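For illustration only (Python; `unit`, `bind`, `get`, `put`, and `lift` are hypothetical names, not the Copute implementation), a minimal State monad sketch makes the point: only morphisms that read or write the state acquire an ordering from the threading of state.

```python
# A State-monad value is a function: state -> (result, new_state).

def unit(a):
    # Wrap a plain value without touching the state.
    return lambda s: (a, s)

def bind(m, f):
    # Thread the state from m into f(a); only morphisms that actually
    # read or write the state acquire an ordering from this threading.
    def run(s):
        a, s2 = m(s)
        return f(a)(s2)
    return run

def get():
    return lambda s: (s, s)       # read the state

def put(s2):
    return lambda s: (None, s2)   # replace the state

def lift(g):
    # A pure function that ignores the state stays order-independent.
    return lambda a: unit(g(a))

# Read the state, increment it purely, write it back, and return it.
prog = bind(get(), lambda n:
       bind(lift(lambda x: x + 1)(n), lambda m:
       bind(put(m), lambda _:
       unit(m))))
```

Running `prog` on an initial state of 41 yields the pair (42, 42): the returned value and the final state.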
"This is an imperative program with no side-effects."
No, it is a pure program.
That is why I asked you what your definition of "imperative" is, because it seems your eyes are fooled by the 'do' syntactical sugar. The types of the pure functions determine whether they are composed purely or imperatively, not the ordering of the syntactical sugar. That composition flexibility is the beauty of the State monad, as I explained already.
"You talk a lot and say little."
I am thinking you did not yet have the epiphany, so you did not absorb what I wrote. Let me try to clarify below.
"I think you’re conflating “order-of-execution” with ordinary causal sequencing. In the expression f (g x), is it important that we execute g x before applying f to it?"
No, I covered that already in my prior reply. Concurrency in the pure lambda calculus is only possible when we reuse functions more than once. The key is that order is not forced when there is reuse; we can calculate the same function concurrently or in opposite order, e.g.
f( g x )( g y ) // or for those who prefer the notation f( g( x ), g( y ) )
"The sequencing isn’t imposed by any side-effects. It’s clear when you look at the type of (>>=) :: m a -> (a -> m b) -> m b. The second action requires the a from the first action (of type m a)"
You are confusing the purity of the functions being composed with the purity of the composition. Indeed the types of the functions determine whether they are dependent on the prior 'bind' call, but in some uses of a monad, a = b. For those cases, the composition is pure and not imperative.
As I said above, do not be fooled by the do syntactical sugar, the types of the composition control whether the composition is pure or imperative. Pure and imperative are opposites. Please try to understand.
Also I forgot to say that printing to the screen (twice) is order dependent. I think you agree there is order-dependence in the example in your article.
I am in rush here and I misstated about a = b. The only relevant point is that the types of the morphism function determine if the composition is pure or imperative. I will try to follow up with a clear example later..
Haha. Thanks.
Blame God for Godel's second incompleteness theorem for the inability to declare a consistent theory of the barrier between pure and imperative.
Pureness is relative (to lack of imperative), like everything else existential. Read Hume.
Impure composition of a pure API can render the order-independence within the pure code imperative relative to the external state. If I accepted your notion of imperative, then everything is imperative, and there is no such thing as pure code. Rather I place the barrier at the border between the pure API and the impure composition, which is deterministic relative to static typing.
You place the barrier for some arbitrary reason on pure monadic composition, which orders the monad state purely, which is not conceptually different than how normal function composition orders the final return value (which could also be considered a state and composed impurely).
I think I had a mistake. As long as the State monad is pure, the imperative code is not the monadic binding of functions that access the state, but rather any impure composition of the resultant state. The IO monad is obviously impure as soon as it calls a system function.
Continuation from the last post on prior page of this thread
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4561
It was censored, but I have a copy here.
I read "Tackling the Awkward Squad" again, and it clearly says at the end of sections 2 and 7 that where IO is used, the program is imperative and impure. So I am not disagreeing with that research paper by Peyton-Jones, who is one of the principal creators of Haskell. He makes the point that IO should normally be in the top-level functions, which call into a core of pure functions. And the point of the IO type is to delineate the impure, imperative portions of the code in a language that is otherwise always pure. So when these types are used, Haskell is not a 100% pure, declarative language. And that is a good thing. This is just Haskell's way of mixing pure and impure code.
The World state is not reversible in time, because it is impossible to cache the infinite states of the World for each discrete time (besides, time is continuous, which is another reason there are infinite states). Haskell does not allow the World in "IO a" to be reversed in time, i.e. all action functions occur in forward time order, and thus the functions that input IO are imperative and no longer declarative. Thus, this restriction on order means the action functions are not pure, because a pure function must be reversible, and irreversibility is a side-effect. For example, you write some output to the World, then you read some input which depends on the user having the output before making the input. Thus the output is a global side-effect in the World that impacts the input. Contrast this with functional reactive programming, which allows inputs and outputs to occur in any order, i.e. it is declarative and order-independent. Haskell can do pure functional reactive programming, and to interface with a stateful world, every functional reactive program will need an impure, imperative "wrapper", which is what IO is.
See also my comments on this page, Pure Functional Language: Haskell
=========================
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4566
Ah I see he uncensored my posts above.
I want to share my impromptu thoughts about why pure FP might be considered declarative. Although there is a structure in function composition which implies a logical order-of-operations, the purity allows the order-of-execution to deviate without any impact on the final result. Structure with order-of-execution independence is one of the general characteristics of a declarative paradigm; whereas, the nature of an imperative program is that the final state depends on the order-of-execution, because the lack of purity means that function state transitions can not be isolated from each other. Seems to be a slam dunk against Rúnar's confusion of terms. Maybe he realized it too:
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4570
===========================
http://kawagner.blogspot.com/2007/02/why-monads-are-evil.html?showComment=1311410012629#c7474729809428588396
Shelby wrote:
It was censored, but I have a copy here.
Definitions:
'pure' = referentially transparent
'impure' = 'imperative' = not referentially transparent = referentially opaque
Rúnar said: In my view “imperative” = “monadic”
Thus, you are claiming that 'imperative' = all pure function composition, h : (b -> c) -> (a -> b) -> a -> c; h f g x = f( g x ). Thus you are claiming that 'imperative' is not distinguishable from pure FP.
The proof for the state monad, is the morphism function m : a -> State s b; m t = \x -> g (t,x), may call a function g : (a,s) -> (b,s). The monadic binding is pure function composition on the tuple, e.g. f( g (a,x) ).
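That reduction can be sketched as follows (Python; `f` and `g` are hypothetical uncurried morphisms on an (a, s) tuple, with the state modelled as a list): monadic binding is just pure composition on the tuple.

```python
# An uncurried state morphism g : (a, s) -> (b, s); the state is a list here.
def g(pair):
    a, s = pair
    return (a + 1, s + ["g"])

def f(pair):
    b, s = pair
    return (b * 2, s + ["f"])

# Monadic binding of the curried forms m t = \x -> g (t, x) reduces to
# plain, pure function composition on the tuple: f(g((a, s))).
def composed(pair):
    return f(g(pair))
```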
That the ordering of pure function composition can produce a final result of type c or (c,s) is irrelevant to the concept of 'imperative'.
Rúnar said: No it isn’t.
My initial succinct comment was correct, 'imperative' is the antithesis of purity-- there is no other definition that gives imperative any meaning. My followup reply correctly predicted that your definition of imperative is muddled.
My further verbose replies were an attempt to understand how you could distinguish some forms of ordering of pure functional composition from others. I was trying to understand and explain what significance you could apply to the abstraction of a monad. The monad abstracts (wraps) some state or computation, so that it is possible to compose functions which map unwrapped types, a -> b, with those that map to wrapped types, a -> m a. I explained that, for example, for the state monad, the composed functions of type a -> b do not operate on the state, thus they certainly can't be 'imperative' relative to the state. And as I proved above, the composition of functions of type a -> State s b is also pure function composition returning a tuple. You try to make a distinction that monadic ordering determines the result tuple, but the ordering of any pure function composition determines its result. There is no distinction.
The bottom line is that 'imperative' is any function that is not referentially transparent. Period. No corner cases, no muddled definition that has no meaning.
In my next comment, I will explain why the IO monad is impure and thus 'imperative', even in Haskell.
The IO monadic composition in Haskell is claimed[1] to be pure because the World (as abstracted by IO a), and not an instance of type a, is an input and output for each action function, i.e. it is assumed that for a given state of the potentially infinite state-machine in the World, the same World state will always be returned[2]. The problem here is that the World state includes time, and thus it never has the same state. This is what I was referring to when I mentioned Coase's and Godel's theorems. The lambda calculus requires the function to be beta-reducible[3], but the structure of the World is never considered by the Haskell type checker[4][5]. Thus it is a hack to claim[1] that side-effects don't matter to purity, by assuming that the World input type of the IO action function is computable, while never having the compiler check its infinite structure. In short, the World's structure is uncomputable, which relates to undecidability and the Halting theorem.
So the IO monad is actually impure (i.e. 'imperative') and Haskell is lying about it[5]. There is an analogy to virtual methods, which at compile-time comply with the variance of the Liskov Substitution Principle for the method parameters and return types, but fail at run-time the pre- and post-condition behavioral semantics of LSP (which is unchecked by most languages). Scala makes it easier to lie about purity, because its type system does not even check for purity, so any kernel function that interacts with the world is impure.
For a language that allows the declaration of a function's type to include whether it is pure or not (e.g. Copute), as I previously stated, the barrier between 'imperative' (i.e. 'impure') and 'pure' is determined by the type of the function (the API).
[1] - [5] The links for the footnotes follow in the next reply.
[1] Section 7, Tackling the Awkward Squad, Peyton-Jones, "this is an important caveat...I have not proved it"
[2] Section 2.2, Tackling the Awkward Squad, Peyton-Jones
[3] http://en.wikipedia.org/w/index.php?title=Lambda_calculus&oldid=433678424#Computable_functions_and_lambda_calculus
[4] http://www.reddit.com/r/haskell/comments/8bhir/why_the_io_monad_isnt_a_dirty_hack/c08s5bc
[5] http://kawagner.blogspot.com/2007/02/why-monads-are-evil.html?showComment=1170781260000#c2061819858844484763
This is my final post to this blog page. I hope this helps you see the light.
Okay since you censored my last 2 posts, then I will send you this message via your blog, since I can not seem to find your contact email address any where.
Rúnar wrote:
"Imperative programming is just Kleisli composition"
John Hughes apparently says you are wrong. I assume you know he helped create Haskell, and is very prominent in the field of FP:
http://www.cse.chalmers.se/~rjmh/afp-arrows.pdf
Section 1.3 Arrows as Computations, Programming with Arrows
Hughes wrote:
"monads..., help to structure purely functional code, with not an imperative operation in sight"
Note I am publishing all of the censored comments to the web, including those censored by Tony Morris over at his blog. And I am aware of your remark at Twitter:
http://twitter.com/#!/runarorama/status/84026046692851712
"Feeding a known troll: http://t.co/2ezUkL8 "
Enjoy the bliss.
Rúnar, thank you for uncensoring my 2 posts.
Let me try to close our discussion amicably.
Rúnar said: The noise-to-signal ratio in your comments exceeds my tolerance.
You added that. My verbose middle comments contained some errors (perhaps due to exhaustion: they were made after midnight at the end of an 18-hour work day, in an Asian internet cafe with multiple patrons simultaneously loudly singing different songs for hours, while I was also fighting an amoeba, taking Flagyl, and I have one eye). The last 4 comments were written after I got some limited sleep (woken by a noisy parade).
I just realized why we were talking past each other.
Imperative means non-declarative, i.e. explicit "control flow". The ambiguity is in the definition of "control flow". A pure function creates a state change by creating new state, not by changing the input state. Thus, apparently some claim that pure FP is declarative, and thus impurity is (a subset of) imperative.
I could accept that definition. Alternatively, I already implied early in our discussion that I could agree that all non-declarative programming is imperative.
Rúnar, are you saying that only Kleisli composition is imperative? Or, are we in agreement that monadic composition generalizes to normal pure function composition, and thus all pure function composition would be imperative, where you said "state transition" (i.e. state creation) is imperative?
Pure function composition is ordered and causes state creation ("transition") whether it be in a monad, Kleisli, or otherwise. Your apparent (?) distinction between monadic and Kleisli led me to believe that you were saying only some FP composition is imperative.
For novice readers, the Kleisli arrow is the function, a -> m b. The arrow "->" is the general function. A function is either impure or pure, and thus either changes or creates state respectively. Some forms of declarative programming (e.g. HTML, CSS, etc.) are stateless, and (if I remember correctly) this is possible because they are not Turing complete.
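For illustration (Python; an Option-like monad is modelled with `None`, and the names are hypothetical), Kleisli composition of two arrows a -> m b can be sketched as:

```python
# A Kleisli arrow for an Option-like monad is a function a -> m b,
# modelled here as returning either a value or None (for "no result").

def kleisli(f, g):
    # Compose f : a -> m b with g : b -> m c into a -> m c.
    def h(a):
        b = f(a)
        return None if b is None else g(b)
    return h

def safe_div(n):
    return None if n == 0 else 100 // n        # int -> m int

def safe_sqrt(n):
    return None if n < 0 else int(n ** 0.5)    # int -> m int

pipeline = kleisli(safe_div, safe_sqrt)
```

e.g. `pipeline(4)` yields 5 (100 // 4 = 25, then its integer square root), while `pipeline(0)` short-circuits to `None`.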
===========================
http://kawagner.blogspot.com/2007/02/why-monads-are-evil.html?showComment=1311410012629#c7474729809428588396
Shelby wrote:
I have refined my understanding, which is explained at the following link:
https://goldwetrust.forumotion.com/t112p180-computers#4434
As Peyton-Jones elucidates in "Tackling the Awkward Squad", IO Monad is simply (one of) Haskell's way(s) of mixing impure, imperative code with pure, declarative code. To the extent that you can make your interface to the world declarative, i.e. pure functional (reactive), then you don't use IO Monad. But at least until the world (i.e. the OS) becomes pure functional, then we need a "wrapper" on our pure functional core-- which is IO Monad.
I gained clarity in the summary of my current open source project, which goes into more detail on this issue:
http://Copute.com
Last edited by Shelby on Fri Dec 09, 2011 10:02 pm; edited 11 times in total
Software engineering is a living organism (why we need Copute!)
http://elegantcode.com/2011/06/22/why-software-development-will-never-be-engineering/
The nature of software development, [...], leads itself to rapid evolution.
Consider what direction software development is evolving. Is it even evolving in the direction of rigid engineering practices or is it evolving in the exact OPPOSITE direction?
Ten years ago, we tried to use UML diagrams and CASE tools to develop software. Ten years ago waterfall was all the rage. Ten years ago, we thought that ten years in the future we would have programs that would allow us to build software in the same way that CAD tools allow for building machine parts.
Not only did it not happen. It went completely the other way. Now we are all talking about Agile. Now people don’t even remember what CASE tools are. Now we are building software without trying to define the whole system in UML diagrams.
The fact of the matter is software systems are unruly beasts!
The analogy was that online poker skills accelerated in the past 5 years due to more interaction and human involvement, i.e. lowering the barriers to COOPeration (note I also own COOPute.com). This is exactly what I am trying to do for software evolution by creating Copute, with its economic and technical models.
I replied to a comment:
http://elegantcode.com/2011/06/22/why-software-development-will-never-be-engineering/#comment-234084180
The difference between software and the bridge builder is that you can calculate if a bridge is stable and can support the weight and withstand the wind... I never could prove a piece of software to be correct with mathematics. Just by using it and seeing if it didn't cause errors.
There are formal semantics for software in some languages, but your point is valid in the sense that the number of variables that have to be fit in much software is greater, and more importantly they are changing more rapidly. So it is not that we lack equational foundations increasingly used in practical software engineering, at least by those on the leading edge (e.g. category theory, monads, programming paradigms, agile practices, agile-enforcing compilers, formal logic, the pure lambda calculus and referential transparency, behavioral static typing, etc.); it is that the dynamic complexity of variables in software is insatiable, because the better our software, the more variables of demand are created by the new possibilities revealed. Software is unique among engineering disciplines in that it is applicable to all of them.
Software is the encoding of human knowledge, which is always progressing.
There are some important equational foundations that we need to popularize so that competitive cooperation on software can accelerate.
http://esr.ibiblio.org/?p=3345#comment-311334
There’s something that many commenters have alluded to, but didn’t come out directly and say:
An engineer will specialize and become expert in some field, like bridge-building. The days when an engineer like Brunel could design bridges and ships and… are long gone.
Software developers know how to develop software, but they come to a big project with no knowledge of the problem domain. They have to learn that on-the-fly, gaining experience by making mistakes. Still worse, the people who commissioned the project in the first place don’t know what they really want, and find problems communicating with the software people. This is much harder than telling an engineering firm that you need a bridge from here to there, to carry so much traffic, and they provide a solution.
OTOH, the above is not all bad. Software’s inherent flexibility allows the business to explore solutions that they might otherwise not be aware of (if they have the nerve to keep funding the project). Trying out a suspension bridge, then a cantilever, then a simple truss design by building them all wouldn’t fly in the engineering world; the engineer would be expected to do that on paper and propose the best one.
Exactly why I am creating Copute. The maximum division-of-labor, and thus specialization, is only possible if code is more reusable and composed of smaller orthogonal modules, so that programmers can specialize and have their efforts reused in different problem domains.
P.S. Daniel's Biblical "travel & knowledge will increase" pops up again, in this online poker (increasing travel virtually) to knowledge progress analogy.
Working on a new Introduction to Copute (also technical rationale)
Copute is a new computer language which aims to correct the mistakes and shortcomings in other programming languages, which have limited the scale at which software can be built from reusable modules.
Fitness
Degrees-of-freedom (DOF) dictate how well a system can adapt to cooperate and fit to a desired solution. For example, there would be gaps (i.e. errors in fitness) between a bicycle chain and the curved shape it wraps around, because the chain can only bend at the links. If instead a solid but flexible metal strip were employed, the metal would remain fit to the curve only under a sustained force; the resisting force is a reduced DOF and an error in fitness. Permanent bending reduces the DOF for wrapping to different curves.
Google can't even find a wise proverb, "if you use your power, then you lose it". The equivalent form is probably popular because it is misunderstood, "absolute power corrupts absolutely". The utility of power exists only because there are resistance forces. This generative fact applies to everything. For example, the power vacuum that gives rise to central authority is due to the vested interests, bickering, selfishness, and complacency (laziness) of the governed.
The proof is given by the well established fundamental physics equations. Power is the rate at which work can be completed. Power requires energy to produce work, but the more work that is performed, the greater the potential energy available to undo the work. This type of potential energy is due to the resistance forces encountered during the work to produce a particular configuration of the subject matter.[1] For example, a compressed spring wants to push back and undo the work.
So if our goal is to get more configurations in our system with less work, then we need to reduce these resistance forces, i.e. increase the DOF. Think of springs opposing movement in numerous directions; we want to remove them. Thus, if we succeed in increasing DOF, it takes less work to produce our desired diversity of configurations, and thus less power to produce them faster. And the configuration of the subject matter which results from our work decays slower (i.e. becomes unfit more slowly), because the resistance forces are smaller. Requiring less power (and work) to produce more of what we want, faster and with greater longevity, is thus more powerful (efficient), and explains the wisdom of the forgotten proverb.
Knowledge
Communication redundancy (amplitude) is a form of power, because its utility exists due to the resistance to comprehension, i.e. due to noise mixed with the signal. The signal-to-noise ratio (SNR) depends on the DOF of both the sender and the receiver.
The difference between signal and noise, is the mutual comprehension (i.e. resonance) between the sender and the receiver, i.e. noise can become a signal or vice versa, depending on the coupling. In physics, resonance is the lack of resistance to the change in a particular configuration of the subject matter, i.e. each resonator is a DOF.[2] DOF is the number of potential orthogonal (independent) configurations, i.e. the ability to obtain a configuration without impacting the ability to obtain another configuration. In short, DOF are the configurations that don't have dependencies on each other.
Thus increasing the number of independent configurations in any system makes the system more powerful, requiring less energy, work, and power to obtain diversity within the system. The second law of thermodynamics says that the universe is trending to maximum entropy, i.e. the maximum number of independent configurations. Entropy (disorder) is a measure of the relative number of independent possibilities. So now we see why Coase's theorem holds: the universe is trending towards the maximum free market, and any cost barrier will fail eventually. This is why small things grow faster: large things reduce the number of independent configurations and thus require exponentially more power to grow. Thus, from the point-of-view of change and the future, small things are exponentially more powerful than large things. The bell curve exists because the minority is exponentially more efficient (more DOF), and because a perfectly equal distribution would require infinite DOF and the end of the universe's trend to maximum disorder. Large things stagnate, rot, collapse, and disappear.
Knowledge is correlated to DOF, because in every definition of knowledge one can think of, an increase in knowledge is an increase in DOF and vice versa. Software is unique among the engineering disciplines in that it is applicable to all of them. Software is the process of increasing knowledge. Thus one of the most critical characteristics of software is that it does not want to be static, and that the larger the components (thus the fewer the DOF), the less powerful (efficient) the software process.
Copute attempts to apply these concepts to software, by increasing the independence (orthogonality) of software components, so they can be composed into diverse configurations with less work. Independent programmers can leverage independent components with less communication and refactoring work, thus Copute is about increasing cooperation.
Below we will outline the major features of Copute which enable independent software components.
Pure Functional Programming
Pure functional programming (PFP) is fundamentally about not repeating oneself, i.e. the ability to compose (small) functions more freely, because it has the following features:
- Pure: for each value for the input, a function can be replaced by a previously cached value of the result, i.e. the function doesn't depend on any external state.
- Higher-order: a function can be a value.
- Curried: a new function can be created by assigning a constant value to an input of an existing function.
- Declarative: function composition expresses logical order-of-operations, but due to the replacement principle for purity, the order-of-execution is independent.
- Mathematical: lambda calculus is PFP, which allows us to use category theory to maximize DOF.
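To make these features concrete, here is a small illustrative sketch in Python (not Copute, whose syntax is still in flux; the function names here are hypothetical):

```python
from functools import lru_cache, partial

# Pure: the result depends only on the inputs, so a call can be
# replaced by a previously cached value of the result.
@lru_cache(maxsize=None)
def square(x):
    return x * x

# Higher-order: a function is an ordinary value that can be passed around.
def twice(f, x):
    return f(f(x))

# Curried: fix one input of an existing function to create a new function.
def power(base, exponent):
    return base ** exponent

cube = partial(power, exponent=3)

assert square(4) == 16
assert twice(square, 2) == 16   # square(square(2))
assert cube(2) == 8
```

Because `square` is pure, `twice(square, 2)` may evaluate its inner and outer calls in any order (or from cache) without changing the result, which is the declarative property described above.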
PFP is never imperative, and is always declarative. In general, declarative languages (e.g. HTML, CSS) express the logic of the system without forcing a particular order-of-execution. Declarative is one of the features required for concurrency. Due to purity, the state transition for each function in PFP is independent (orthogonal) of all the others and of the order-of-execution, and thus does not inhibit DOF as imperative programming does. Some people mistakenly state that PFP is imperative, but these are cases where a particular language compiler fudges (e.g. the IO monad in Haskell[3]) or does not check the purity of a construct, giving the appearance of purity within the semantics of the language's type system while being imperative in the semantics of state external to that type system.
PFP is mutually inclusive with OOP, not mutually exclusive. PFP and OOP are duals in that each can be used to solve the Expression Problem for maximum DOF.[4] And category theory reveals that PFP and OOP are composable as follows.
Although the DOF benefits derive from math, the composition benefits of PFP are intuitive, because every programmer understands how to compose functions. The (category theory models of) higher-level object subtypes (OOP) can be composed with ordinary functions (functions which input type T and return type A) without having to write specialized versions of these functions for each possible subtype we create. These OOP models simply compose as if they are (and thus resonate with) ordinary functions. Read the Copute documentation for each of these models to aid in understanding them.
Object Oriented Programming
Copute builds on several critical Scala innovations, such as higher-kinded generics, generic type parameter variance annotations, multiple inheritance of interface and implementation, and pattern matching with abstract data types (case classes). Copute adds a few critical refinements for single-point-of-truth (SPOT), which drastically improve DOF, conciseness, and intuitive understanding.
- Purity
- Unreferencable implementation
- No virtual re-implementation
- Static methods instead of implicits
- Disobscurity: e.g. one way to write a function instead of 6, no methods as operators.
- No exceptions
TODO: discuss single-point-of-truth. Also discuss application of granular purity to OOP.
[1] http://en.wikipedia.org/w/index.php?title=Energy&oldid=435292864
Stored energy is created whenever a particle has been moved through a field it interacts with (requiring a force to do so), but the energy to accomplish this is stored as a new position of the particles in the field—a configuration that must be "held" or fixed by a different type of force (otherwise, the new configuration would resolve itself by the field pushing or pulling the particle back toward its previous position). This type of energy "stored" by force-fields and particles that have been forced into a new physical configuration in the field by doing work on them by another system, is referred to as potential energy. A simple example of potential energy is the work needed to lift an object in a gravity field, up to a support.
[2] http://en.wikipedia.org/w/index.php?title=Resonance&oldid=432632299#Resonators
A physical system can have as many resonant frequencies as it has degrees of freedom.
http://en.wikipedia.org/w/index.php?title=Resonance&oldid=432632299#Mechanical_and_acoustic_resonance
Mechanical resonance is the tendency of a mechanical system to absorb more energy [i.e. less resistance] when the frequency of its oscillations [i.e. change in configuration] matches the system's natural frequency of vibration [i.e. natural change in configuration] than it does at other frequencies.
[3] https://goldwetrust.forumotion.com/t112p180-computers#4434
[4] http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4535
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4537
Last edited by Shelby on Mon Jul 25, 2011 10:07 am; edited 4 times in total
Efficiency of purely functional programming
http://twitter.com/#!/kmett/status/74517454973444096
http://stackoverflow.com/questions/1990464/efficiency-of-purely-functional-programming
Array access explains the logarithmic cost factor:
http://stackoverflow.com/questions/1255018/n-queens-in-haskell-without-list-traversal/1268285#1268285
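To make the logarithmic factor concrete, here is a minimal sketch of my own (not from the linked answers) of a persistent array represented as a complete binary tree: a read walks one root-to-leaf path, and an "update" copies only the O(log n) nodes on that path, so every older version survives unchanged (the free persistence), versus O(1) destructive writes for a mutable array:

```python
def build(values):
    """Build a persistent array as a complete binary tree
    (len(values) must be a power of two for this sketch)."""
    nodes = list(values)
    while len(nodes) > 1:
        nodes = [(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def get(tree, size, i):
    """O(log n) read: descend one path from root to leaf."""
    while size > 1:
        size //= 2
        if i < size:
            tree = tree[0]
        else:
            tree = tree[1]
            i -= size
    return tree

def update(tree, size, i, value):
    """O(log n) 'write': copy only the nodes on the path to leaf i;
    the old version remains valid and shares the rest of the tree."""
    if size == 1:
        return value
    half = size // 2
    if i < half:
        return (update(tree[0], half, i, value), tree[1])
    return (tree[0], update(tree[1], half, i - half, value))

t = build([10, 11, 12, 13])
t2 = update(t, 4, 2, 99)   # t is still intact; t2 shares the left subtree
```
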
I'm becoming skeptical about the claim that pure functional is generally log n slower:
https://goldwetrust.forumotion.com/t112p195-computers#4537
Discussion of tradeoffs for lazy programming:
https://goldwetrust.forumotion.com/t112p165-computers#4430
Going immutable in a strict language can cost you a logarithmic factor, but you get free persistence http://bit.ly/6T6lfr
Last edited by Shelby on Thu Aug 25, 2011 5:58 pm; edited 3 times in total
BitCoin is socialist and not (and no online currency can be) anonymous
http://www.bitcoin.org/
The fundamental problem is that any online currency either requires a centralized trust authority which is subject to govt regulation, or a majority vote of decentralized authorities which is just socialism and the antithesis of what makes gold and silver money (he who holds the gold makes the rules, no third party authority or majority vote). I had come to this conclusion a long time ago in the Precious Metals forum during my analysis of whether an online currency (potentially backed by gold and silver) could survive govt retribution.
BitCoin can be defeated by a hacker who puts a virus on a majority of computers using BitCoin, or a govt legislation that requires every computer sold to run a firmware program that defeats it (even if a minority will remove the firmware illicitly):
http://forum.bitcoin.org/index.php?topic=12435.msg172811#msg172811
Thus the BitCoin algorithm has an attack vector, which encourages govt regulation of our computers.
No anonymity on the internet:
http://en.wikipedia.org/wiki/Bitcoin#Anonymity
============================
I should also be frank with my friends.
...at FinancialSense.com. The topic of Bitcoin helped me find direction for it.
Spoken like a good collectivist:
Government debt should be settled, and outlawed going forward. A balanced budget amendment to the Constitution should be adopted.
I suppose you didn't realize that you were also outlawing the 2nd law of thermodynamics. But I am not going to explain that one to you, because you will NEVER get it.
Btw, Bitcoin is vulnerable to attack by a government law which says every computer must contain a chip which the government can use to hijack the computer for purposes of regulating peer-to-peer protocols. This is because for money to gain enough supply to be viable, the protocol has to be fairly well known and static.
Last edited by Shelby on Tue Sep 20, 2011 11:15 pm; edited 2 times in total
Your neighborly Google, whose slogan is "Do no evil"
Your neighborly Google, whose slogan is "Do no evil":
http://www.stuff.co.nz/technology/digital-living/5217945/Google-Wi-Fi-snooping-lawsuits-can-proceed
Computers create heat and no potential energy
Note this applies to the hardware perspective, while in the software dimension, there is potential energy created (which is why software decays over time relative to the changing applications).
http://www.fibertown.com/pdf/uptimeinstituteheatdensitytrends.pdf
Power consumed by computer and communication products is totally converted to heat.
So this means that the work being done is not creating potential energy (sustained resistance forces), but rather permanently bending the structures of resistance, thus releasing heat (heat is disordered energy, which is not useful, i.e. not "signal"). See the quotes of what I wrote previously about DOF. Remember that "work" is noise, and the information transmitted (resonated between sender and receiver) is the "signal". We want to eliminate "work" and increase information.
In a computer, these bending resistance forces are stored due to capacitance, i.e. the ability to store a charge of electrons across an insulator, and the fact that a transistor is a variable resistor (insulator) controlled by a charge applied to it. These resistance forces are reduced by making the transistors smaller, which is what drives Moore's Law (the number of transistors on a silicon die doubles roughly every 18 months).
My previous treatise on physics and information:
https://goldwetrust.forumotion.com/t112p180-computers#4436
Shelby wrote:[...]
Employing instead a solid, but flexible metal strip, the metal would remain fit to the curve only with a sustained force-- the resisting force being a reduced DOF and an error in fitness. Permanent bending reduces DOF for wrapping to different curve
[...]
The proof is given by the well established fundamental physics equations. Power is the rate at which work can be completed. Power requires energy to produce work, but the more work that is performed, the greater the potential energy available to undo the work. This type of potential energy is due to the resistance forces encountered during the work to produce a particular configuration of the subject matter.[1] For example, a compressed spring wants to push back and undo the work.
So if our goal is to get more configurations of in our system with less work, then we need to reduce these resistance forces, i.e. increase the DOF. Think of springs opposing movement in numerous directions, and we want to remove them. Thus, if we succeed to increase DOF, it takes less work to produce our desired diversity of configurations, thus less power to produce them faster. And the configuration of the subject matter which results from our work, thus decays (i.e. becomes unfit slower), because the resistance forces are smaller. Requiring less power (and work), to produce more of what we want and faster, with a greater longevity, is thus more powerful (efficient) [...]
Versioning, forking, bug fixes, etc... in Copute.com
THIS IS NO LONGER THE MODEL.
Please note that this is a thought experiment and it may not be implemented for Copute. There is ongoing thinking on how to obtain the best free market pricing model. Preferably it should be entirely decentralized.
Some private writings solved most of the issues in the Copute.com economic model. The remaining issue is modifications to existing modules.
Summary of Economic Model
First, a brief summary of the prior two posts. Copute will be a new open source (free) computer language that fosters reusability of modules of programming (to a finer granularity of module size than other languages). Anyone can use Copute for free, even modify it, and there is no requirement to use the Copute.com marketplace when using the Copute language. In addition, Copute.com will offer a non-exclusive (the same modules can be offered elsewhere, even for free) marketplace for programmers to offer their modules to other developers, so that duplication of programming effort (modules) can be reduced. Any derivative software product created from these Copute.com modules will pay the greater of 1% of gross sales, or 10% of the salaries[1] to develop that derivative software product. Copute.com will distribute these royalties to module owners (after deducting Copute.com's fee, perhaps 1% of the royalties) proportionally, based on an automated, statistically sampled profiling of the derivative software in use to determine the percent of time the CPU spends in each module.
Copute.com won't have a monopoly, because any other site can offer a similar marketplace for Copute modules (or even modules of any computer language, although the Copute language will be best for reusing modules). This marketplace is to make modules fungible, and the importance of this is discussed here. Note that even though modules might be offered for free elsewhere, if the derivative software uses even one module from Copute.com that is not free elsewhere, then it will pay the same royalty to Copute.com. Thus the freedom of non-exclusivity does not render Copute.com's business model impotent. In fact, it adds inertia, and of course some module owners will not want to offer their modules for free elsewhere (they might offer them elsewhere not for free, which again would not hurt Copute.com's business model, but rather add inertia to it).
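The royalty distribution described above can be sketched as follows (a hypothetical illustration; the function name and numbers are mine, and the 1% fee is the figure suggested in the summary):

```python
def distribute_royalties(royalty, cpu_samples, site_fee=0.01):
    """Split a royalty payment among module owners in proportion to
    profiled CPU time, after deducting the marketplace's fee."""
    pool = royalty * (1 - site_fee)          # fee deducted first
    total = sum(cpu_samples.values())
    return {module: pool * samples / total   # proportional to CPU time
            for module, samples in cpu_samples.items()}

# e.g. module 'a' was sampled three times as often as module 'b'
shares = distribute_royalties(100.0, {'a': 300, 'b': 100})
```
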
Module Improvement Ownership
But who owns the module when someone other than the original owner improves it (bug fix, refinement, etc.)? I can't think of a metric of value for such improvements to modules. The best idea I have so far is a very flexible versioning scheme: give minimally 1% of the "remaining value" for each fix, and give the existing owners of a module the option to increase that minimum award. Let me explain.
A module is created and offered by the owner(s) on Copute.com. This is open source, meaning anyone can offer an improvement. So how do we assign the ownership percentage of the improved version of the module? On the first improvement, the original owner has a 100% share of ownership, so we minimally and automatically award 1% of that to the submitter of the improved version. The original owner has the option of logging into his account on Copute.com and increasing that minimum award (only the existing owners will have the knowledge to evaluate an improvement and decide its value). So then there are two owners, and minimally the split is 99% and 1%. Then a second improvement is made, so minimally 1% of the remaining 99% is automatically awarded, which is 0.99%. This award is taken proportionally from the existing owners, so the first owner has 98.02%, the second has 0.99%, and the third has 0.99%. Either of the two prior owners can optionally award more, giving up some of their percentage. Eventually, for example, the original owner may have say 50%, and each improvement owner (minimally) 0.5%. The point is that future improvements slow down the erosion of prior ownership, unless the prior owners choose to award higher value. This is desirable because of the law of diminishing returns (the 80/20 rule).
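The minimum-award arithmetic above can be sketched as follows (one interpretation of the scheme, assuming only the minimum 1% awards and no optional increases: each award is 1% of a "remaining value" that itself shrinks by 1% per improvement, and is taken proportionally from all existing owners):

```python
def award_improvement(shares, rate=0.01):
    """Award the submitter of the next improvement `rate` of the
    "remaining value" (100%, 99%, 98.01%, ... after each improvement),
    diluting all existing owners proportionally."""
    remaining = (1 - rate) ** (len(shares) - 1)
    award = rate * remaining
    return [s * (1 - award) for s in shares] + [award]

shares = [1.0]                      # original owner: 100%
shares = award_improvement(shares)  # 99% / 1%
shares = award_improvement(shares)  # ~98.02% / ~0.99% / 0.99%
```

Note the total always sums to 100%, and each successive award is smaller, so prior ownership erodes ever more slowly, as described above.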
Note that projects (e.g. Linux, Firefox, etc.) tend to have thousands of bug fixes and improvements, thus the original creators would be diluted to near 0% at project-scale granularity if this model were applied; but within a project, the modules typically require only dozens of fixes/improvements, so this model encourages original owners to create smaller (more granular, thus more) reusable modules. So we see that the benefit of smaller things and reusability scales economically well, whereas project-level granularity does not (cannot be fungible).
Also note that the above concept is economic ownership, i.e. the percentage of royalties, not control. If a module is released on Copute.com marketplace, the control is relinquished to the terms of the license which allows unlimited modifications and forks but requires them to reside on the Copute.com marketplace, not even Copute.com has any centralized decision control. The original owner of the module might release the original module else where with different licensing terms (Copute.com marketplace is non-exclusive), but that owner can not release the improvements else where and neither can the submitter of the improvement release the improvement else where (unless making the improvement to a identical fork in a different marketplace of the same module). So we see that Copute.com's forks potentially become unique from the forks in other marketplaces, unless all submitters of improvements submit to all marketplaces.
Versioning and Forking
Copute language, and thus the complementary (and orthogonal) marketplace on Copute.com, is all about freedom-of-choice. Thus we don't want to limit in any way. So submitters of improvements should be free to omit selective prior improvements, i.e. to create a new fork in the version trail of the module. I have an idea for a version numbering scheme that accommodates this with explicit enumeration of omissions. A version number will be numbers separated by periods, e.g. "1.0.1", where each period identifies a fork. So the first version of a module is simply "1". If an improvement is made to that module, then the version becomes "2", the next "3", etc.. If an improvement omits improvement "2" and keeps "3", then the version number is "1:3.1". If an improvement omits improvement "3" forward, then the version number is "2.1". If an improvement omits improvement "2" and keeps "3" through "5", then the version number is "1:3-5.1". If an improvement omits improvement "3" and keeps "4" through "5", then the version number is "2:4-5.1", which is equivalent shorthand for "1-2:4-5.1". The ".1" becomes a new fork and then it can also be forked, so version numbers can end up with multiple periods.
Note that when a submitter proposes to create an improvement without forking, this will initially be created as a fork, then it must be accepted by one of the owners of the trunk, in order to become a numbered as a trunk improvement instead of a fork. This is necessary so that owners don't have to create forks to indicate they don't approve of an improvement.
Note there is no automated way to precisely identify that a particular improvement actually includes and omits the improvements it says it does. Thus it is allowed to have two forks of the same of fork, and this is done using a "..n.1", e.g. "1:3..1.1" and "1:3..2.1" would be an alternative forks of "1:3.1".
Note this proposed module version numbering scheme bears almost no correlation of semantics to existing project version numbering schemes that also employ periods.
So which version will people use if there are multiple forks (which tip of which branch of a metaphorical tree)? They will use which ever fork they think best fits their needs. The popularity of use of forks in derivative software, will be a vote on the fitness, i.e. the free market will anneal the answer. The module owner(s) may endorse a particular fork by making improvements in it. This is superior to the current open source model where forking is highly discouraged and the project owners (current languages don't support module fine granularity of reuse) are (hopefully "benevolent") dictators "for life" (Linus Torvalds and others have been referred to with the quoted title).
[1] Salaries are those of the company developing the derivative software product, not any salaries that were paid to develop the modules that are offered on Copute.com.
Please note that this is a thought experiment and it may not be implemented for Copute. There is ongoing thinking on how to obtain the best free market pricing model. Preferably it should be entirely decentralized.
Some private writings solved most of the issues in the Copute.com economic model. The remaining issue is modifications to existing modules.
Summary of Economic Model
First, a brief summary of the prior two posts. Copute will be a new open source (free) computer language that fosters reusability of modules of programming (to a finer granularity of module size than with other languages). Anyone can use Copute for free, even modify it, and there is no requirement to use the Copute.com marketplace when using the Copute language. In addition, Copute.com will offer a non-exclusive (the same modules can be offered elsewhere, even offered for free) marketplace for programmers to offer their modules to other developers, so that duplication of programming effort (modules) can be reduced. Any derivative software product created from these Copute.com modules will pay the greater of 1% of gross sales or 10% of the salaries[1] paid to develop that derivative software product. Copute.com will distribute these royalties to module owners (after deducting Copute.com's fee, perhaps 1% of the royalties) proportionally, based on an automated, statistically sampled profiling of the derivative software in use to determine the percent of time the CPU spends in each module.
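The royalty rule above can be sketched numerically. This is a minimal illustration only; the function names, the 1% marketplace fee, and the sample figures are assumptions for the sketch, not specified details of the model:

```python
def royalty_due(gross_sales, dev_salaries):
    """Royalty owed by a derivative product: the greater of 1% of gross
    sales or 10% of the salaries paid to develop that product."""
    return max(0.01 * gross_sales, 0.10 * dev_salaries)

def distribute(royalty, cpu_time_by_module, fee_rate=0.01):
    """Split a royalty among module owners in proportion to profiled CPU
    time, after deducting the marketplace fee (assumed here to be 1%)."""
    pool = royalty * (1 - fee_rate)
    total = sum(cpu_time_by_module.values())
    return {m: pool * t / total for m, t in cpu_time_by_module.items()}

print(royalty_due(1_000_000, 50_000))            # 10000.0 (1% of gross wins)
print(distribute(10_000, {"parser": 3, "ui": 1}))
```

So a product grossing $1M whose development salaries were $50k owes $10,000 (the 1% of gross dominates), and that royalty is split 3:1 between the two profiled modules after the fee.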
Copute.com won't have a monopoly, because any other site can offer a similar marketplace for Copute modules (or even modules of any computer language, although the Copute language will be best for reusing modules). This marketplace is to make modules fungible, and the importance of this is discussed here. Note that even though modules might be offered for free elsewhere, if the derivative software uses even one module from Copute.com that is not free elsewhere, then it will pay the same royalty to Copute.com. Thus the freedom of non-exclusivity does not render Copute.com's business model impotent. In fact, it adds inertia, and of course some module owners will not want to offer their modules for free elsewhere (they might offer them elsewhere not for free, which again would not hurt Copute.com's business model, but rather add inertia to it).
Module Improvement Ownership
But who owns the module when someone other than the original owner improves it (bug fix, refinement, etc.)? I can't think of a metric of value for such improvements to modules. The best idea I have so far is to have a very flexible versioning scheme, award minimally 1% of the "remaining value" for each fix, and give the existing owners of a module the option to increase that minimum award. Let me explain.
A module is created and offered by the owner(s) on Copute.com. This is open source, meaning anyone can offer an improvement. So how do we assign the ownership percentages of the improved version of the module? At the first improvement, the original owner has a 100% share of ownership, so we minimally and automatically award 1% of that to the submitter of the improved version. The original owner has the option of logging in to his account on Copute.com and increasing that minimum award (only the existing owners will have the knowledge to evaluate an improvement and decide its value). So then there are two owners; minimally the split is 99% and 1%. Then a second improvement is made, so minimally 1% of 99% is automatically awarded to its submitter, which is 0.99%. The remaining 99.01% is then split proportionally between the prior owners, so the first owner has 98.02%, the second has 0.99%, and the third has 0.99%. Either of the two prior owners can optionally award more, giving up some of their percentage. Eventually, for example, it may reach the point where the original owner has say 50%, and each improvement owner has (minimally) 0.5%. The point is that future improvements slow down the erosion of prior ownership, unless the prior owners choose to award higher value. This is desirable because of the law of diminishing returns (80/20 rule).
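Under one reading of the minimal award rule above (the new submitter receives 1% of the original owner's current share, and all prior shares are rescaled proportionally so the total stays at 100%), the arithmetic can be checked with a short sketch. The function name and this interpretation of the rule are mine, not specified:

```python
def award(shares, fraction=0.01):
    """Apply one minimal automatic award: the new submitter receives
    `fraction` of the original owner's current share (shares[0]), and all
    prior shares are rescaled proportionally so the total remains 100."""
    new_share = fraction * shares[0]
    scale = (100 - new_share) / 100
    return [s * scale for s in shares] + [new_share]

v2 = award([100.0])   # after the 1st improvement: 99% / 1%
v3 = award(v2)        # after the 2nd: ~98.02% / ~0.99% / 0.99%
print([round(s, 2) for s in v3])
```

This reproduces the 99/1 and 98.02/0.99/0.99 splits quoted in the text, and the shares always sum to 100.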
Note that projects (e.g. Linux, Firefox, etc.) tend to have thousands of bug fixes and improvements, so the original creators would be diluted to near 0% at project-scale granularity if this model were applied there; but within a project, the modules typically require only dozens of fixes/improvements, so this model encourages original owners to create smaller (more granular, thus more numerous) reusable modules. So we see that the benefit of smaller things and reusability scales economically well; whereas project-level granularity does not (it cannot be fungible).
Also note that the above concept is economic ownership, i.e. the percentage of royalties, not control. If a module is released on the Copute.com marketplace, control is relinquished to the terms of the license, which allows unlimited modifications and forks but requires them to reside on the Copute.com marketplace; not even Copute.com has any centralized decision control. The original owner of the module might release the original module elsewhere with different licensing terms (the Copute.com marketplace is non-exclusive), but that owner cannot release the improvements elsewhere, and neither can the submitter of an improvement release the improvement elsewhere (unless making the improvement to an identical fork of the same module in a different marketplace). So we see that Copute.com's forks potentially become distinct from the forks in other marketplaces, unless all submitters of improvements submit to all marketplaces.
Versioning and Forking
The Copute language, and thus the complementary (and orthogonal) marketplace on Copute.com, is all about freedom-of-choice. Thus we don't want to limit it in any way. So submitters of improvements should be free to omit selected prior improvements, i.e. to create a new fork in the version trail of the module. I have an idea for a version numbering scheme that accommodates this with explicit enumeration of omissions. A version number will be numbers separated by periods, e.g. "1.0.1", where each period identifies a fork. So the first version of a module is simply "1". If an improvement is made to that module, the version becomes "2", the next "3", etc. If an improvement omits improvement "2" and keeps "3", then the version number is "1:3.1". If an improvement omits improvement "3" onward, then the version number is "2.1". If an improvement omits improvement "2" and keeps "3" through "5", then the version number is "1:3-5.1". If an improvement omits improvement "3" and keeps "4" through "5", then the version number is "2:4-5.1", which is equivalent shorthand for "1-2:4-5.1". The ".1" designates a new fork, and that fork can itself be forked, so version numbers can end up with multiple periods.
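The shorthand rules above can be sketched as a small parser that expands the trunk part of a version string into the set of trunk improvements it includes. This is a hypothetical helper (the function name is mine), and it ignores the "..n" fork-disambiguation suffix:

```python
def included_improvements(version):
    """Expand the trunk part of a version string (the text before the
    first '.') into the sorted list of trunk improvement numbers it
    includes.  A leading bare number N abbreviates the range 1-N; ':'
    separates the kept ranges, implicitly omitting everything between."""
    trunk = version.split('.')[0]
    included = set()
    for i, part in enumerate(trunk.split(':')):
        if '-' in part:
            lo, hi = map(int, part.split('-'))
        else:
            lo = 1 if i == 0 else int(part)   # leading "2" means "1-2"
            hi = int(part)
        included.update(range(lo, hi + 1))
    return sorted(included)

print(included_improvements("2.1"))        # [1, 2]
print(included_improvements("1:3.1"))      # [1, 3]
print(included_improvements("1:3-5.1"))    # [1, 3, 4, 5]
print(included_improvements("2:4-5.1"))    # [1, 2, 4, 5], same as "1-2:4-5.1"
```

Each example matches the worked cases in the text, including the equivalence of "2:4-5.1" and its long form "1-2:4-5.1".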
Note that when a submitter proposes to create an improvement without forking, it will initially be created as a fork; it must then be accepted by one of the owners of the trunk in order to become numbered as a trunk improvement instead of a fork. This is necessary so that owners don't have to create forks to indicate that they don't approve of an improvement.
Note there is no automated way to precisely verify that a particular improvement actually includes and omits the improvements it says it does. Thus it is allowed to have two forks of the same fork, and this is done using a "..n.1" suffix, e.g. "1:3..1.1" and "1:3..2.1" would be alternative forks of "1:3.1".
Note this proposed module version numbering scheme bears almost no semantic relation to existing project version numbering schemes that also employ periods.
So which version will people use if there are multiple forks (which tip of which branch of a metaphorical tree)? They will use whichever fork they think best fits their needs. The popularity of forks in derivative software will be a vote on their fitness, i.e. the free market will anneal the answer. The module owner(s) may endorse a particular fork by making improvements in it. This is superior to the current open source model, where forking is highly discouraged and the project owners (current languages don't support fine-grained module reuse) are (hopefully "benevolent") dictators "for life" (Linus Torvalds and others have been referred to with the quoted title).
[1] Salaries are those of the company developing the derivative software product, not any salaries that were paid to develop the modules that are offered on Copute.com.
Last edited by Shelby on Wed Mar 27, 2013 12:06 am; edited 2 times in total
Social networking is driven by group formation
Everybody wants to belong to a clique that they identify with:
http://blogs.forbes.com/chunkamui/2011/07/15/why-google-is-poised-to-fail/
http://blogs.forbes.com/chunkamui/2011/01/12/why-facebook-beat-myspace-and-why-myspaces-revised-strategy-will-probably-fail/
Scala's abstract type doesn't solve the Expression Problem, but interfaces do
A research paper by the creator of Scala explains how Scala's abstract type can enable extension simultaneously in the subtype and method dimensions.
However, my understanding is that it doesn't really provide a solution, and interfaces do. The research paper implies that if Exp1 extends Exp and adds a show method, then a class Plus1( l:Exp1, r:Exp1 ) extends Plus( l, r ) with Exp1 would not work because either the compiler would complain, or two instances of l and r would need to be saved in the class, an Exp copy in Plus and an Exp1 copy in Plus1. As far as I can see, the proposed solution using an abstract type exp isn't really a solution to the Expression Problem, because the abstract type needs to be closed before it can be referenced by legacy code that would be extended by Plus1. Thus I don't see how legacy code can be extended with new methods without recompilation with this paradigm, to satisfy the Expression Problem.
Instead I would suggest that Plus1 does not need to extend Plus (any shared implementation could be inherited via a mixin), because consumers of Plus and Plus1 should be inputting an interface Exp and Exp1 respectively, and never a class Plus or Plus1. Thus a Plus1 can interoperate with legacy code that expects an Exp.
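A rough sketch of this claim, using Python structural protocols as stand-ins for Copute interfaces (the names Exp, Exp1, Plus1 follow the paper's example; Num and the method bodies are illustrative assumptions, not Copute or Scala syntax):

```python
from typing import Protocol

class Exp(Protocol):
    def eval(self) -> int: ...

class Exp1(Exp, Protocol):            # extends Exp with a show method
    def show(self) -> str: ...

class Num:                            # an implementation of Exp1
    def __init__(self, n: int): self.n = n
    def eval(self) -> int: return self.n
    def show(self) -> str: return str(self.n)

class Plus1:                          # implements Exp1 WITHOUT extending Plus
    def __init__(self, l: Exp1, r: Exp1): self.l, self.r = l, r
    def eval(self) -> int: return self.l.eval() + self.r.eval()
    def show(self) -> str: return f"({self.l.show()} + {self.r.show()})"

def legacy_eval(e: Exp) -> int:       # legacy code references only the interface
    return e.eval()

e = Plus1(Num(1), Num(2))
print(legacy_eval(e))                 # 3 -- Plus1 works where an Exp is expected
print(e.show())                       # (1 + 2)
```

Because legacy_eval inputs the interface Exp and never the class Plus, the new Plus1 interoperates with it unchanged, which is the point being made.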
This is another example of why Copute will not allow implementation (e.g. class or mixin) to be referenced: only interfaces should be referenceable. This is yet more evidence that this is a major breakthrough design decision for Copute.
==========
My analysis thus far is that Scala's explicitly typed self references which are employed in the "Functional Decomposition" section of the above research paper, are also a complication which become unnecessary if interface and implementation inheritance are unconflated, i.e. for the same reason explained above. Interfaces can be employed to provide orthogonal, flattened abstraction of extensible implementation, when all references (e.g. function inputs and outputs) are interfaces, without binding to the inheritance of implementation. The general conclusion of my current understanding is that the complexity of Scala's abstract typing is due to supporting extension with conflation of interface and implementation inheritance. Remove the conflation as Copute proposes, then these complexities in the type system can be discarded. This should make Copute code much more intuitive, less verbose, and eliminate these examples of complex typing hierarchies to support open extension.
Future examples will elucidate these claims.
OOP in Haskell
The description of Haskell's typing system is complex, but the conclusions with respect to the Expression Problem are not. Type construction (i.e. the constructor type name and any constructor argument values as type members) is orthogonal to type methods (i.e. interface) in Haskell. Thus any preexisting constructable type can through extension be declared to implicitly inherit from any interface, but from the perspective of the interface these are just new subtypes. So in Haskell to be extensible and solve the Expression Problem, functions should input interfaces instead of constructable types, but this is not enforced, so interface and implementation (i.e. constructable types) can be conflated. So in terms of the Expression Problem, Haskell is nearly equivalent in power and flexibility to a subtyped OOP program which only references interfaces and never implementation (i.e. class or mixin), except that the use of pattern matching on function inputs in Haskell is not extensible, because Haskell pattern matches only on constructable types and not "interfaces" (i.e. interface is a typeclass). See also What is Pattern Matching? Also Haskell can't pattern match (guard) on the instances of an "interface".
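A rough analogue of this orthogonality outside Haskell: Python's functools.singledispatch similarly lets a new operation be registered for preexisting types without modifying them. This is an illustrative parallel only (generic functions, not a typeclass), with hypothetical names:

```python
from functools import singledispatch

@singledispatch
def describe(x):                      # the "interface method"
    raise TypeError(f"no instance for {type(x).__name__}")

@describe.register(int)               # retroactive "instance" for int,
def _(x):                             # without touching int's definition
    return f"int {x}"

@describe.register(list)              # retroactive "instance" for list
def _(x):
    return f"list of {len(x)}"

print(describe(42))                   # int 42
print(describe([1, 2]))               # list of 2
```

As with Haskell's 'instance' declarations, the operation is declared orthogonally to the data type, so preexisting types gain it without recompilation; unlike Haskell, nothing enforces that callers dispatch through the generic function rather than a concrete type.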
Note however that a subtyped OOP program which references implementation is not as extensible as Haskell, because then preexisting data types cannot be given new interfaces. If implementation is never referenced, then data types that may contain implementation (e.g. class or mixin) can be given new interfaces, in terms of the Expression Problem, via multiple inheritance of the data type and the new interface in a new data type, because the supertype data type was never referenced. Thus Scala's implicit conversion functions[6] defeat subtyping (as a solution to the Expression Problem) because they reference implementation by always returning the same constructable data type (which is the implicit substitution for the input type), for no gain in flexibility and expressive power (when Copute's static interface methods are available to emulate Haskell's typeclass interfaces) as compared to an OOP program that does not reference implementation.
- Data type class ('data') has constructors[1], and methods are declared orthogonally as an 'instance' (subtype) of a Typeclass ('class') interface.[2][9] Haskell uses the keyword 'data' for a data type class (implementation) and 'class' for an interface.
- The explicit parametric polymorphism of a data type class is kind 0 (i.e. none) or 1.[1] Note 'tyvar' of 'simpletype' has arity >= 0, so 0 is kind 0, and > 0 is kind 1. Haskell's type context class/data A a => B a is roughly trait/class B[a <: A] in Scala. An implicit higher-kind is declared with two or more 'tyvar' type parameters where one is the type parameter of the other in a constructor argument[3] (also the kind of an instance of an interface is inferred, see Functor example[2]). It is not possible to declare that a future subtype of 'tycon' must be a kind > 0, because there is no subtyping of data type class (the constructors are the possible subtypes and it is final). Any data type class can be declared to be the 'instance' (subtype) of a 'typeclass' interface at any time, thus a data type class has infinite potential supertypes[9]. Note that the 'context' is a supertype bound applied to a type parameter, but it is not exclusive, meaning the data type class of the type parameter implements (is an 'instance' of) that interface, but may also implement infinite others, i.e. any data type can be coerced to any interface via an 'instance' declaration (i.e. an implicit conversion). Any function that inputs a data type class (1st letter uppercase in Haskell), can not be extended per the Expression Problem. Any function that inputs a type parameter without an interface context, is implicitly structurally polymorphic and solves the Expression Problem with implicit structural subtyping. Any function that inputs a type parameter constrained by an interface context, is explicitly (requires the type to a declared 'instance' of the interface) structurally polymorphic and solves the Expression Problem with explicit structural subtyping.
Unlike subtyped OOP, Haskell allows a new interface to be implemented by a preexisting data type class (without recompiling it)[9], because the declaration of methods (with 'instance') is orthogonal to the declaration of a data type class. However, functions which wish to be extensible in the Expression Problem will not input data type classes, but rather interfaces. And the supertype(s) (i.e. context) of a Haskell typeclass interface are final. Thus adding an interface to a preexisting data type class in Haskell is analogous to creating a new data type in OOP which inherits the preexisting implementation (class or mixin) and a new interface which inherits from the old interface, because no code should be inputting the preexisting data type anyway (remember "constructors are evil", always input a factory method). Btw, this is why I think Scala's implicit conversion functions are unnecessary, and also dangerous because they defeat subtyping.
- Constructor arguments are immutable data members of the class[1] which can be accessed by position with pattern matching. The lack of argument names gives no hint as to the purpose of the arguments (e.g. data Point a = Point a a means the 2 coordinates have the same type, but doesn't hint that they are x,y cartesian coordinates); see the next item for a solution.[4]
- Constructor arguments may optionally be accessed with named getter methods.[4]
- Private constructors employing 'module' and 'data'.[5]
- Mixins employing 'module' and 'newtype' where each method must be explicitly inherited.[5]
- A Haskell interface (typeclass) may contain a method with a default implementation (and it may call the other abstract methods of the interface), thus Haskell conflates implementation and interface.
- An interesting insight from #2 and #3 above, that Haskell has a weakness that does not exist in subtyped OOP, is that since functions that input data type classes are not extensible in the Expression Problem, thus a function is not extensible if it has input(s) that are pattern matched on constructor argument(s), unless we (perhaps use View Patterns would work or) provide a Typeclass interface to them and input the Typeclass interface instead, e.g.
- Code:
class IColor c where
    -- note: Haskell requires the type parameters in the interface to have the
    -- same context as in the instance, which means Haskell is not SPOT for
    -- bounds on type parameters; see section 7.3 of Generics of a Higher Kind
    red   :: MinMaxValue a => c a -> a
    green :: MinMaxValue a => c a -> a
    blue  :: MinMaxValue a => c a -> a

-- MinMaxValue is assumed to provide minval, maxval :: MinMaxValue a => a
data Color a = Red | Green | Blue | RGB { r, g, b :: a }

-- the equations for each method must be contiguous in Haskell
instance IColor Color where
    red   Red         = maxval
    red   Green       = minval
    red   Blue        = minval
    red   (RGB r _ _) = r
    green Red         = minval
    green Green       = maxval
    green Blue        = minval
    green (RGB _ g _) = g
    blue  Red         = minval
    blue  Green       = minval
    blue  Blue        = maxval
    blue  (RGB _ _ b) = b
- Haskell does not allow the same data type to inherit multiple different instances of the same interface (typeclass), i.e. Haskell allows many supertype interfaces but each must be a unique interface.[7][8] (Note: a typeclass can be multiply inherited if it contains no methods, e.g. class Exp.) Thus an Int can't implement an Ord(ering) in multiple ways, i.e. Int can't be an instance of both Ord and OrdDivisible if OrdDivisible inherits from Ord; thus functions that would normally compose on Ord have to compose separately on Int and IntDivisible. What a mess. In short, interfaces can be multiply inherited, but the diamond problem isn't allowed.
[1] The Haskell 98 Report, section 4.2.1 Algebraic Datatype Declarations
[2] A Gentle Introduction to Haskell, Version 98, section 5 Type Classes and Overloading
[3] The Haskell 98 Report, section 4.6 Kind Inference
[4] A Gentle Introduction to Haskell, Version 98, section 6.2 Field Labels
[5] The Haskell 98 Report, section 5.8 Abstract Datatypes
[6] Programming in Scala, Odersky, Spoon, Venners, chapter 21 Implicit Conversions and Parameters.
[7] Modules Matter Most, Existential Type blog, Robert Harper.
[8] Modular Type Classes, Dreyer, Harper, Chakravarty
[9] Software Extension and Integration with Type Classes, by Ralf Lämmel and Klaus Ostermann
Last edited by Shelby on Tue Dec 06, 2011 8:03 am; edited 34 times in total
re: Types are Anti-Modular
http://gbracha.blogspot.com/2011/06/types-are-anti-modular.html?showComment=1311534787150#c2109159097718758997
Shelby wrote:For independent compilation (and design) of typed interfaces, modules can declare their interfaces independently, then a glue module coerces their interfaces to interoperate.
Imagine a client that integrates a test server. The interface of the test server is the external interface of the client.
Or in other words, the boundary between intra-module modularity and inter-module modularity is arbitrary and thus doesn't exist.
All type systems leak implementation to the extent that the semantic invariants are not checked by the compiler. Checking more invariants (stronger typing) is an attempt to leak less implementation, to catch more errors at compile-time.
P.S. the failure of an arbitrary barrier to the free market is expected by Coase's Theorem. Coase's Theorem is a special case of the 2nd law of thermodynamics, and the 2nd law states that the universe trends to maximum independent (more precisely, orthogonal) actors (i.e. disorder/entropy).
Bool is blindness
Don't throw away the context by creating a Bool, pass the context around instead:
http://existentialtype.wordpress.com/2011/03/15/boolean-blindness/
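A minimal illustration of the point, with hypothetical contains/find helpers: instead of returning a Bool and discarding the evidence, return the evidence itself (here, the index at which the element occurs, or None):

```python
from typing import Optional, Sequence

def contains(xs: Sequence[int], x: int) -> bool:
    return x in xs                    # blind: the caller learns only yes/no

def find(xs: Sequence[int], x: int) -> Optional[int]:
    for i, v in enumerate(xs):
        if v == x:
            return i                  # evidence: WHERE it was found
    return None

xs = [3, 1, 4]
i = find(xs, 4)
if i is not None:
    print(xs[i])                      # no need to search the list again
```

With contains, a caller that later needs the position must search again (and risks the two searches disagreeing); find keeps the context in the return value.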
And equality is not decidable for all types:
http://existentialtype.wordpress.com/2011/03/15/dont-mention-equality/
Monad/Comonad duality, also lazy/eager duality
http://blog.sigfpe.com/2006/06/monads-kleisli-arrows-comonads-and.html?showComment=1311779802514#c3359419049328160234
Shelby Moore III wrote:
Edit#1: The word "compositionality" can refer to the principle that the meaning of the terms of the denotational semantics, e.g. the Comonad model, should depend only on the meaning of the fragments of the syntax it employs, i.e. the subterms. What I understand from a research paper[1], is that the compositional degrees-of-freedom of the higher-level language created by the higher-level denotational semantics, is dependent on the "free variables" in the compositionality fragments. Thus the compositionality can be affected by the evaluation order and other operational semantics. Due to the Halting Problem, where the lower-level semantics is Turing complete, the subterms will never be 100% context-free. I have proposed that when the higher-level semantics unifies lower-level concepts, compositionality is advanced. Please see the section Skeptical? -> Higher-Level -> Degrees-of-Freedom -> Formal Models -> Denotational Semantics at http://copute.com for more explanation.
[1] Declarative Continuations and Categorical Duality, Filinski, section 1.4.1, The problem of direct semantics
Edit#2: The composition of functions which do not input a comonad, with those impure ones that do, can be performed with the State monad. The comonad method Comonad[T] -> T is impure (see the explanation in the section Skeptical? -> Higher-Level -> Degrees-of-Freedom -> Formal Models -> Denotational Semantics -> Category Theory at http://copute.com), so we must thread it across functions which might be pure, using the State monad. Thus the answer to my last question is "correct": we can purely compose any pure functions which input a Comonad, because the comonad method (Comonad[T] -> A) -> Comonad[T] -> Comonad[A] is pure if its input function, Comonad[T] -> A, is pure. Also the answer to my other question is "incorrect", because a monad can abstract a comonad only for the history of observations (see the explanation in the same section), because a monad has no interface for creating a new observation.
Shelby Moore III wrote:
Monad is the model for any parametric type that we know the generative structure of, so we can compose functions on lifted outputs, because the type knows how to lift (i.e. construct, generate, 'unit' or 'return') instances of its type parameter to its structure.
Comonad is the model for any parametric type whose structure we don't know how to generate, but we can observe instances of the type parameter in its structure as they occur. We will only know its final structure when it is destructed and observation ceases. We can't lift instances of its type parameter to its structure, so we can't compose functions on outputs. Instead, we can compose functions with lifted inputs (and optionally outputs, i.e. map on observations), because the type has observations.
Conceptually the monad vs. comonad duality is related to the duality of induction vs. coinduction, and of initial vs. final (least vs. greatest) fixpoints, because we can generate structure for a type that has an initiality, but we can only observe structure until we reach a finality.
Induction and Co-induction
Initiality and Finality
Wikipedia Coinduction
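The monad vs. comonad contrast above can be sketched concretely; a minimal Python illustration (the `History` type and the `extract`/`extend` names are hypothetical stand-ins for the comonad counit and cobind) of composing functions on lifted inputs over a history of observations:

```python
class History:
    """A nonempty history of observations, used as a minimal comonad-like structure."""
    def __init__(self, observations):
        assert observations          # comonads are nonempty: there is always a focus
        self.obs = list(observations)

    def extract(self):
        # Comonad counit (w a -> a): observe the current (latest) value.
        return self.obs[-1]

    def extend(self, f):
        # Comonad cobind ((w a -> b) -> w a -> w b): f sees the whole context
        # up to each observation point; we compose on lifted *inputs*.
        return History([f(History(self.obs[:i + 1])) for i in range(len(self.obs))])

# Example: a running average, computed from the context at each observation.
h = History([1, 2, 3, 4])
avg = h.extend(lambda w: sum(w.obs) / len(w.obs))
print(avg.obs)        # [1.0, 1.5, 2.0, 2.5]
print(avg.extract())  # 2.5
```

Note the function passed to `extend` never constructs a `History` itself; it only observes one, which matches the point that we can't lift values into the comonad's structure.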
I had visited this blog page before (and not completely grasped it), then I read this page again trying to conceptualize the sums vs. products duality for eager vs. lazy evaluation.
Perhaps I am in error, but it appears that with lazy evaluation and corecursion, a monad can be used instead of a comonad, e.g. isn't it true that a stream can be abstracted by a monadic list in Haskell?
So dually, am I correct to interpret that laziness isn't necessary for modeling compositionality of coinductive types, when there is comonad in the pure (referential transparent) part where the composition is?
Edit#1: The word "compositionality" can refer to the principle that the meaning of the terms of the denotational semantics, e.g. the Comonad model, should depend only on the meaning of the fragments of the syntax it employs, i.e. the subterms. What I understand from a research paper[1], is that the compositional degrees-of-freedom of the higher-level language created by the higher-level denotational semantics, is dependent on the "free variables" in the compositionality fragments. Thus the compositionality can be affected by the evaluation order and other operational semantics. Due to the Halting Problem, where the lower-level semantics is Turing complete, the subterms will never be 100% context-free. I have proposed that when the higher-level semantics unifies lower-level concepts, compositionality is advanced. Please see the section Skeptical? -> Higher-Level -> Degrees-of-Freedom -> Formal Models -> Denotational Semantics at http://copute.com for more explanation.
[1] Declarative Continuations and Categorical Duality, Filinski, section 1.4.1, The problem of direct semantics
Edit#2: The composition of functions which do not input a comonad, with those impure ones that do, can be performed with the State monad. The comonad method, Comonad[T] -> T, is impure (see explanation in the section Skeptical? -> Higher-Level -> Degrees-of-Freedom -> Formal Models -> Denotational Semantics -> Category Theory at http://copute.com), so we must thread it across functions which might be pure, using the State monad. Thus the answer to my last question is "correct": we can purely compose any pure functions which input a Comonad, because the comonad method, (Comonad[T] -> A) -> Comonad[T] -> Comonad[A], is pure if its input function, Comonad[T] -> A, is. Also the answer to my other question is "incorrect", because a monad can abstract a comonad, but only for the history of observations (see explanation at the same section), because a monad has no interface for creating a new observation.
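The question of abstracting a stream by a lazy monadic list can be illustrated outside Haskell; a minimal Python sketch (the `naturals` and `fmap` names are hypothetical), using a generator as the lazily generated, corecursive structure:

```python
import itertools

def naturals():
    """Corecursively generate an infinite stream; structure is produced on demand."""
    n = 0
    while True:
        yield n
        n += 1

def fmap(f, stream):
    """Map (compose a function over the lifted structure) without forcing it."""
    return (f(x) for x in stream)

# Only the observed prefix is ever evaluated, as with Haskell's lazy lists.
squares = fmap(lambda x: x * x, naturals())
prefix = list(itertools.islice(squares, 5))
print(prefix)  # [0, 1, 4, 9, 16]
```

The generator plays the role the lazy list plays in Haskell: the infinite coinductive structure is never built, only observed a finite prefix at a time.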
Shelby Moore III wrote:
Followup to the two questions in my prior comment.
Monad can't abstract a comonad, because it has no method, m a -> a, for creating a new observation. A monad can abstract the history of prior observations. Afaics, for a language with multiple inheritance, a subtype of comonad could also be a subtype of monad, thus providing a monadic interface to the history of observations. This is possible because the comonad observation factory method, m a -> a, is impure (the state of the comonad blackbox changes when history is created from it).
Composition of functions, m a -> b, which input a comonad is pure (i.e. no side-effects, referentially transparent, declarative not imperative) where those functions are pure (e.g. they do not invoke m a -> a to create a new observation). In short, the method (m a -> b) -> m a -> m b is pure if m a -> b is.
Last edited by Shelby on Sat Jul 30, 2011 1:01 pm; edited 7 times in total
Call-by-need memoizes arguments, not functions
http://augustss.blogspot.com/2011/04/ugly-memoization-heres-problem-that-i.html#3700840423100518476
Shelby Moore III wrote:
Shelby Moore III wrote:
@francisco: Haskell's call-by-need (lazy evaluation) memoizes function arguments, but not functions.
=====verbose explanation======
The arguments to a function are thunked, meaning the arguments get evaluated only once, and only when they are needed inside the function. This is not the same as checking if a function call was previously called with the same arguments.
If the argument is a function, the thunk will call it once without checking if that function had been called elsewhere with the same arguments.
Thunks are conceptually similar to parameterless anonymous functions with a closure on the argument, a boolean, and a variable to store the result of the argument evaluation. Thus thunks incur no lookup costs, because they are parameterless. The cost of the thunk is the check on the boolean.
Thunks give the same amount of memoization as call-by-value (which doesn't use thunks). Neither call-by-need nor call-by-value memoize function calls. Rather both do not evaluate the same argument more than once. Call-by-need delays that evaluation with a thunk until the argument is first used within the function.
Apologies for being so verbose, but Google didn't find an explanation that made this clear, so I figured I would cover all the angles in this comment.
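The thunk semantics described above can be sketched directly; a hypothetical Python `Thunk` class, illustrating that call-by-need evaluates an argument at most once but performs no function-call memoization:

```python
class Thunk:
    """Call-by-need argument: evaluated at most once, on first use."""
    def __init__(self, compute):
        self._compute = compute   # parameterless closure over the argument expression
        self._done = False        # the boolean mentioned above
        self._value = None        # storage for the result of the argument evaluation

    def force(self):
        if not self._done:        # the only per-access cost: one boolean check
            self._value = self._compute()
            self._done = True
        return self._value

calls = []
def expensive():
    calls.append(1)               # count how many times the argument is evaluated
    return 42

t = Thunk(expensive)
t.force(); t.force()
print(len(calls))  # 1 -- forced twice, evaluated once

# A *separate* thunk over the same expression evaluates again:
# there is no memoization of the function across call sites.
t2 = Thunk(expensive)
t2.force()
print(len(calls))  # 2
```

The second thunk shows the distinction in the comment above: the sharing is per argument occurrence, not per (function, arguments) pair.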
Eager vs. Lazy evaluation
See also:
https://goldwetrust.forumotion.com/t112p165-computers#4430
========================================
http://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.html#4642367335333855323
Shelby Moore III wrote:
Appreciated this article. Some points:
1. Lazy also has latency indeterminism (relative to the imperative world, e.g. IO monad).
2. A compiler strategy that dynamically subdivides map for parallel execution on multiple cores requires that map not be lazy.
3. any = or . map is not "wrong" for eager (strict) evaluation when there is referential transparency. It is slower in sequential time, but maybe faster in parallel execution.
4. Wikipedia says that for Big O notation, O(n) is faster than O(n log n) = O(log n!). @Lennart, I think you meant to say that O(n) is faster than O(n log n).
5. Given that laziness causes space and latency indeterminism, if the main reason to use lazy is to avoid the performance hit for conjunctive functional composition over functors, then only functions which output applicable functors need apply laziness. As @martin (Odersky) suggested, provide lazy and eager versions of these functions. Thus eager by default with optional lazy annotations would be preferred.
Principle and 80/20 rule. 80+% of the programmers in the world are not likely to grasp debugging lazy space leaks. It will only take one really difficult one to waste a work-week, and that will be the end of it. And what is the big payoff, especially with the parallelism freight train bearing down? Someone claimed that perhaps the biggest advance in mainstream languages since C was GC (perhaps Java's main improvement over C++). Thus, if the typical programmer couldn't do ref counting without creating cycles, I don't think they will ever grasp lazy space and latency indeterminism. I am approaching this from wanting a mainstream language which can replace Java for statically typed programming.
Am I correct to say?
RT eager code evaluated as RT lazy could exhibit previously unseen space and latency issues.
RT lazy code evaluated as RT eager could exhibit previously unseen non-termination, e.g. infinite recursion and exceptions.
@augustss Rash judgments w/o experience are annoying. Two decades of programming, and copious reading, is all I can humbly offer at this moment. This is imperfect and warrants my caution. I appreciate factual correction.
My point is that with eager, debugging the changes in the program's state machine at any function step, will be bounded to the function hierarchy inside the body of the function, so the programmer can correlate changes in the state machine to what the function is expected to do.
Whereas, with lazy, any function may backtrack into functions that were in the argument hierarchy of the current function, and into functions called an indeterminate time prior. Afaics, lazy debugging should be roughly analogous to debugging random event callbacks, and reverse engineering the state machine in a blackbox event generation module.
As I understand from Filinski's thesis, eager and lazy are categorical duals in terms of the inductive and coinductive values in the program. Eager doesn't have products (e.g. conjunctive logic, "and") and lazy doesn't have coproducts (e.g. disjunctive, "or"). So this means that lazy imposes imperative control logic incorrectness from the outside-in, because coinductive types are built from observations and their structure (coalgebra) is not known until the finality when the program ends. Whereas, eager's incorrectness is from the inside-out, because inductive types have an a priori known structure (algebra) built from an initiality. Afaics, this explains why debugging eager has a known constructive structure and debugging lazy is analogous to guessing the structure of a blackbox event callback generation module.
My main goal is to factually categorize the tradeoffs. I am open to finding the advantages of lazy. I wrote a lot more about this at my site. I might be wrong, and that is why I am here to read, share, and learn. Thanks very much.
Could you tell me why lazy (with optional eager) is better for you than referentially transparent (RT) eager (with optional lazy), other than the speed of conjunctive (products) functional composition? Your lazy binding and lazy functions points should be doable with terse lazy syntax at the let or function site only, in a well designed eager language. Infinite lazy types can be done with the optional lazy in an eager language with such optional lazy syntax. Those seem to be superficial issues with solutions, unlike the debugging indeterminism, which is fundamental and unavoidable.
I wish this post could be shorter and still convey all my points.
Idea: perhaps deforestation could someday automate the decision on which eager code paths should be evaluated lazily.
This would perhaps make our debate moot, and also perhaps provide the degrees-of-freedom to optimize the parallel vs. sequential execution time trade-off at runtime. I suppose this has been looked at before. I don't know whether it is possible, because I haven't studied deeply enough the research on automated deforestation.
I understand the "cheap deforestation" algorithm in Haskell only works on lazy code paths, and only those with a certain "foldr/build" structure.
Perhaps an algorithm could flatten to their bodies, the function calls in eager referentially transparent (i.e. no side-effects) code paths but in lazy order until a cyclical structure is identified, then "close the knot" on that cyclical structure. Perhaps there is some theorem that such a structure is bounded (i.e. "safe") in the transformation of coproducts to products correctness, i.e. space and latency determinism.
Your any = or . map example flattens to a cycle (loop) on each element of the functor (e.g. list), which converts each element to a boolean, exits if the result is true, and always discards the converted element in every possible code path of the inputs. That discard proves (assuming RT and thus no side-effects in map) the lazy evaluation of the eager code has no new coproducts, and thus the determinism in space and latency is not transformed.
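That flattening can be checked concretely; a Python sketch (the `eager_any_map` and `lazy_any_map` names are hypothetical) where making map lazy lets `any . map` discard each converted element and exit at the first True:

```python
def eager_any_map(pred, xs):
    # Eager composition: map completes before any inspects the results.
    evaluated = [pred(x) for x in xs]
    return any(evaluated), len(evaluated)

def lazy_any_map(pred, xs):
    # Lazy composition: each element is converted and immediately discarded;
    # evaluation exits the loop at the first True.
    count = 0
    def gen():
        nonlocal count
        for x in xs:
            count += 1
            yield pred(x)
    return any(gen()), count

xs = [1, 3, 4, 5, 6]
print(eager_any_map(lambda x: x % 2 == 0, xs))  # (True, 5): all elements evaluated
print(lazy_any_map(lambda x: x % 2 == 0, xs))   # (True, 3): stops at the first even
```

Assuming `pred` is referentially transparent, the two versions always agree on the boolean; only the amount of work (and, for infinite `xs`, termination) differs, which is the space/time determinism point above.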
There are discarded products (map did not complete), thus in a non-total language, some non-termination effects may be lost in the transformation. Anyway, I think all exceptions should be converted to types, thus the only non-termination effect remaining is infinite recursion.
There is the added complication that in some languages (those which enforce the separation-of-concerns, interface and implementation), the map in your example may be essentially a virtual method, i.e. selected at runtime for different functors, so the deforestation might need to be a runtime optimization.
@augustss Are you conflating 'strict' with 'not referentially transparent'? I think that is a common misperception because there aren't many strict languages which are also RT. So the experience programmers have with strict languages is non-compositional due to the lack of RT. Afaik, composition degrees-of-freedom is the same for both strict and lazy, given both are RT. The only trade-off is with runtime performance, and that trade-off has pluses and minuses on both sides of the dual. Please correct me if I am wrong on that.
@augustss thanks, I now understand your concern. The difference in non-termination evaluation order with runtime exceptions is irrelevant to me, because I proposed that by using types we could eliminate all exceptions (or at least eliminate catching them in the declarative RT portion of the program). I provide an example of how to eliminate divide-by-zero at copute.com (e.g. a NonZero type that can only be instantiated from a case of Signed, thus forcing the check at compile-time before calling the function that has division). A runtime exception means the program is in a random state, i.e. that the denotational semantics is (i.e. the types are) not fit to the actual semantics of the runtime. Exceptions are the antithesis of compositional regardless of the evaluation order.
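The divide-by-zero elimination mentioned above can be sketched; since Python lacks static types, this hypothetical `NonZero`/`safe_div` sketch moves the check to construction time rather than compile time, but the shape of the idea is the same:

```python
class NonZero:
    """Witness that a value is not zero; the only way to get one is the checked case split."""
    def __init__(self, value):
        if value == 0:
            raise ValueError("NonZero cannot hold zero")
        self.value = value

    @staticmethod
    def from_signed(n):
        """The 'case of Signed': returns None for zero, forcing the caller to branch."""
        return None if n == 0 else NonZero(n)

def safe_div(numerator, denominator):
    # safe_div (a hypothetical name) can never raise ZeroDivisionError:
    # its signature demands the zero check already happened.
    assert isinstance(denominator, NonZero)
    return numerator / denominator.value

d = NonZero.from_signed(4)
if d is not None:
    print(safe_div(10, d))      # 2.5
print(NonZero.from_signed(0))   # None -- the zero case must be handled explicitly
```

In a statically typed language the branch on `None` would be a pattern match the compiler enforces, so the "exception" becomes an ordinary, composable case of the type.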
In my opinion, a greater problem for extension with Haskell (and Scala, ML, etc) is afaics they allow conflation of interface and implementation. That is explained at my site. I have been rushing this (80+ hour weeks), so I may have errors.
Another problem for extension is Haskell doesn't have diamond multiple inheritance. Perhaps you've already mentioned that.
Modular Type Classes, Dreyer, Harper, Chakravarty.
@augustss agreed we must ensure termination for cases which are not bugs, but infinite recursion is a bug in strict, thus I excluded it for comparing compositionality. No need to go 100% total in strict to get the same compositionality as lazy. Perhaps Coq can ensure against the infinite recursion bug, but that is an orthogonal issue.
Diamond problem impacts compositionality, because it disables SPOT (single-point-of-truth). For example, Int can't be an instance of Ord and OrdDivisible, if OrdDivisible inherits from Ord. Thus functions that would normally compose on Ord have to compose separately on Int and IntDivisible, thus do not compose.
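The SPOT point can be illustrated with diamond-capable multiple inheritance; a minimal Python sketch (the `Ord`, `OrdDivisible`, and `MyInt` names are hypothetical) where one integer type serves both compositions:

```python
from abc import ABC, abstractmethod

class Ord(ABC):
    @abstractmethod
    def less(self, other): ...

class OrdDivisible(Ord):   # inherits Ord: the hierarchy the text describes
    @abstractmethod
    def divide(self, other): ...

class MyInt(OrdDivisible):  # one type, one point of truth for both interfaces
    def __init__(self, n): self.n = n
    def less(self, other): return self.n < other.n
    def divide(self, other): return MyInt(self.n // other.n)

def minimum(xs):
    """Composes on Ord alone; works for any subtype, including OrdDivisible ones."""
    m = xs[0]
    for x in xs[1:]:
        if x.less(m):
            m = x
    return m

vals = [MyInt(5), MyInt(2), MyInt(9)]
print(minimum(vals).n)  # 2
```

Without inheritance between the two interfaces, a separate `IntDivisible` would be needed, and `minimum` over `Ord` values could not accept it; with the diamond-friendly hierarchy, it composes.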
@augustss if 'map pred list' doesn't terminate, it is because list is infinite. But infinite lists break parallelism and require lazy evaluation, so I don't want them as part of a default evaluation strategy. Offering a lazy keyword (i.e. a type annotation or lazy type) can enable expression of those infinite constructions, but in my mind, it should be discouraged, except where clarity of expression is a priority over parallelism. Did I still miss any predominant use case?
Thus, I concur with what Existential Type replied. For example, pattern match guards for a function are runtime function overloading (splitting the function into a function for each guard), thus the compiler shouldn't evaluate the functions (guard cases) that are not called.
It appears to me (see my idea in a prior comment) that lazy is an ad-hoc, runtime, non-deterministic approximation of deforestation. We need better automatic deforestation-aware algorithms for eager compilers, so the parallelism vs. sequential execution strategy is relegated to the backend rather than the language.
Last edited by Shelby on Thu Aug 25, 2011 12:49 pm; edited 27 times in total
Computer Assisted Learning is capital's attempt to enslave mankind
I didn't write this, but I agree with it. That is not to say that computer-assisted learning can't be useful, but rather that it must be assisted by real human teachers. This is related to the post I made some time ago, refuting the idea that computers could replace humans.
http://www.soc.napier.ac.uk/~cs66/course-notes/sml/cal.htm
CAL Rant
The user should have control at all times, you are not forced to go through the material in any particular order and you are expected to skip the dull bits and miss those exercises which are too easy for you. You decide.

The author does not believe that CAL is a good way to learn. CAL is a cheap way to learn; the best way to learn is from an interactive, multi-functional, intelligent, user-friendly human being. The author does not understand how it is that we can no longer afford such luxuries as human teachers in a world that is teeming with under-employed talent.

His main objection to CAL is that it brings us closer to "production line" learning. The production line is an invented concept; it was invented by capital in order to better exploit labour. The production line attempts to reduce each task in the manufacturing process to something so easy and mindless that anybody can do it, preferably anything. That way the value of the labour is reduced, the worker need not be trained and the capitalist can treat the worker as a replaceable component in a larger machine. It also ensures that the worker's job is dull and joyless; the worker cannot be good at his or her job because the job has been designed to be so boring that it is not possible to do it badly or well, it can merely be done quickly or slowly. Production line thinking has given us much, but nothing worth the cost. We have cheap washing machines which are programmed to self-destruct after five years; cars, clothes, shoes - all of our mass-produced items have built-in limited life spans - this is not an incidental property of the production line, it is an inevitable consequence.
The introduction of CAL is the attempt by capital to control the educators. By allowing robots to teach we devalue the teacher and make him or her into a replaceable component of the education machine. I do not see how such a dehumanizing experience can be regarded as "efficient", the real lesson learned by students is that students are not worth speaking to, that it is a waste of resources to have a person with them. The student learns that the way to succeed is to sit quietly in front of a VDU and get on with it. The interaction is a complete sham - you may go down different paths, but only those paths that I have already thought of, you can only ask those questions which I have decided to answer. You may not challenge me while "interacting". I want students to contradict, to question, to object, to challenge, to revolt, to tear down the old and replace with the new.
Do not sit quietly and work through this material like a battery student. Work with other people, talk to them, help each other out.
OOP in Standard ML (SML)
My prior email needs followup clarification.
From section 2.1 of your paper[1], I assume a functor's 'sig' or 'struct' can recurse, e.g.
- Code:
functor f( a : A ) sig
type b
val map : (f(a) -> b) -> f(a) -> f(b)
end
I have not yet learned how 'map' can be made polymorphic on b, independently of the creation of a concrete instance of f.
With limited study, I infer that 'structure' is a concrete data type where all abstract 'type' and methods must be defined and implemented. That 'signature' is an abstract type to the extent that it has inner abstract 'type' and unimplemented method signature(s). And that 'functor' is a higher kind constructor (for abstract 'sig' or concrete 'struct'), i.e. type parametrization.
Thus roughly the following informal correspondence to Scala:
signature = trait
functor,,,(...) sig = trait,,,[...]
structure = mixin
functor,,,(...) struct = mixin,,,[...]
using ... in ,,, = class ,,, extends ...
So it appears Scala is more generalized than Haskell, and perhaps equivalently so to SML (this would need more study). Scala also has type bounds in the contravariant direction and variance annotations (i.e. these would be the functor parameters in SML), as well as a bottom type Nothing.
Fyi, Iry's popular chart needs correction. Haskell and ML are higher-kinded.
http://james-iry.blogspot.com/2010/05/types-la-chart.html
> ml has higher kinds only through the module system. the class you mention
> is the type of a functor in ml. so, yes, we have this capability, but in
> a different form. the rule of thumb is anything you can do with type
> classes in haskell is but an awkward special case of what you can do with
> modules in ml. my paper "modular type classes" makes this claim
> absolutely precise.
It appears my suggested separation of abstract interface and concrete implementation, would require that sig must not reference any structure nor functor struct, and functions must only reference signature or functor sig.
For Copute, one of the planned modifications of Scala is that traits and functions can only reference traits.
Of course it is given that everything can reference the function constructor (->).
One example of the benefit of such abstraction: if we reference the red, green, blue values of a color type, we are calling interface methods, not matching constructor values. Thus subtypes of color may implement red, green, blue orthogonally to their constructor values.
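That color example can be sketched; a hypothetical Python illustration (the `Color`, `RGB`, `HSV`, and `luminance` names are invented) where red, green, blue are interface methods orthogonal to the constructor values:

```python
from abc import ABC, abstractmethod
import colorsys

class Color(ABC):
    # Clients call interface methods; they never match on constructor values.
    @abstractmethod
    def red(self): ...
    @abstractmethod
    def green(self): ...
    @abstractmethod
    def blue(self): ...

class RGB(Color):
    def __init__(self, r, g, b):
        self._r, self._g, self._b = r, g, b
    def red(self): return self._r
    def green(self): return self._g
    def blue(self): return self._b

class HSV(Color):
    # Constructor values are hue/saturation/value, orthogonal to the interface.
    def __init__(self, h, s, v):
        self._h, self._s, self._v = h, s, v
    def red(self):   return colorsys.hsv_to_rgb(self._h, self._s, self._v)[0]
    def green(self): return colorsys.hsv_to_rgb(self._h, self._s, self._v)[1]
    def blue(self):  return colorsys.hsv_to_rgb(self._h, self._s, self._v)[2]

def luminance(c):
    # Composes on the Color interface only; works for any representation.
    return 0.2126 * c.red() + 0.7152 * c.green() + 0.0722 * c.blue()

print(luminance(RGB(1.0, 0.0, 0.0)))  # 0.2126
```

A client that pattern-matched on the RGB constructor instead would break as soon as an HSV subtype appeared; composing on the interface keeps the representations substitutable.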
I am fairly sleepy, I hope this was coherent.
[1] Modular Type Classes, Dreyer, Harper, Chakravarty, section 2.1 Classes are signatures, instances are modules