Computers:
Every 10 years we need a new programming language paradigm
http://creativekarma.com/ee.php/weblog/about/
About now, it's time for the one that follows the Java (virtual machine, garbage collection, no pointers, everything is an object) paradigm.
In 1975 I started using “structured programming” techniques in assembly language, and became a true believer.
In 1983 a new era dawned for me as I started doing some C programming on Unix and MS-DOS. For the next five years, I would be programming mixed C/assembly systems running on a variety of platforms including microcoded bit-slice graphics processors, PCs, 68K systems, and mainframes. For the five years after that, I programmed almost exclusively in C on Unix, MS-DOS, and Windows.
Another new era began in 1994 when I started doing object-oriented programming in C++ on Windows. I fell in love with OO, but C++ I wasn’t so sure about. Five years later I came across the Eiffel language, and my feelings for C++ quickly spiraled toward “contempt.”
The following year, 2000, I made the switch to Java and I’ve been working in Java ever since.
Copute influenced by
The Wikipedia page for each computer language lists the languages that it was influenced by.
Copute influenced by in chronological ascending order:
C++
PHP
JavaScript
HaXe
Haskell
Scala
Copute influenced by degree of importance/influence descending order:
HaXe
Haskell
Scala
JavaScript
PHP
C++
P.S. I misread the charts at indeed.com: they show "percentage growth", not "percentage rate of growth". Thus the 2500% growth of Scala jobs and the -50% loss of Cobol jobs are cumulative, not a variable rate of growth (i.e. distance, not the 1st derivative = velocity). It looks like Scala's 2500% growth was (afair) primarily in the past 2 years, so it is still an astronomic rate if sustained, but in any case the number of years would be significantly more than I computed previously. Also the rate of growth may be slowing already (I would need to go back and study the chart carefully).
Doug Pardee rebuttal
http://creativekarma.com/ee.php/weblog/comments/death_to_the_liskov_substitutability_principle/
The problem is not to reuse monoliths but to make interfaces granular enough to reuse the components of the monoliths.
http://creativekarma.com/ee.php/weblog/comments/static_typing_and_scala/
Although I agree that to some extent Scala has some syntax and paradigm complexity overload, afaics you missed the key point that although imperative code does not impact the referential transparency (mutability) of its containing function, FP code is required to obtain parallelism:
https://goldwetrust.forumotion.com/t112p90-computers#4061
I think much of Scala's perceived complexity comes from three things: the way Scala has been explained so far has been very obtuse; the syntax is just different enough from the C++/Java genre to make it hard to read (it takes the mind years to become as comfortable reading a new syntax as it did learning to drive a manual transmission); and many paradigms in Scala are not unified (e.g. lazy vals and by-name parameters are really both just automatic function closures). I am working on improving these issues for Copute and adding referential transparency.
http://copute.com/dev/docs/Copute/ref/intro.html
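The claim that lazy vals and by-name parameters are both just automatic function closures can be sketched outside Scala: a by-name parameter behaves like a thunk re-evaluated at each use, while a lazy val is a memoized thunk. A minimal TypeScript sketch (my illustration of the idea, not Scala's actual implementation):

```typescript
// Sketch: both Scala's `lazy val` and by-name parameters modeled as
// zero-argument closures ("thunks"); `lazy val` adds memoization on top.

type Thunk<A> = () => A;

// A by-name parameter is a thunk re-evaluated each time it is referenced.
function byName<A>(thunk: Thunk<A>, times: number): A[] {
  const results: A[] = [];
  for (let i = 0; i < times; i++) results.push(thunk()); // re-evaluated each use
  return results;
}

// A lazy val is a thunk evaluated at most once, with the result cached.
function lazyVal<A>(thunk: Thunk<A>): Thunk<A> {
  let cache: { value: A } | null = null;
  return () => {
    if (cache === null) cache = { value: thunk() };
    return cache.value;
  };
}

let evaluations = 0;
const expensive = () => { evaluations++; return 42; };

byName(expensive, 3);             // evaluated 3 times
const cached = lazyVal(expensive);
cached(); cached();               // evaluated only once more
```

The only difference between the two paradigms is the memoization, which supports the point that they could be unified under one closure mechanism.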
Last edited by Shelby on Sat Aug 06, 2011 1:14 pm; edited 4 times in total
Compare explanations of mixins
My explanation:
http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
Creator of Scala's explanations:
http://www.scala-lang.org/node/117
pg 5, 2.2 Modular Mixin Composition section of Scalable Component Abstractions, Odersky & Zenger, Proceedings of OOPSLA 2005, San Diego, October 2005 (note Copute reverses the inheritance list order).
Which one is more complete, concise, and comprehensible? Don't answer in this thread please.
Here is a worse explanation from elsewhere on the web:
http://debasishg.blogspot.com/2006/04/scala-compose-classes-with-mixins.html
P.S. I post to my goldwetrust.forumotion.com forum, so I have copies of my thoughts to refer back to, so I won't lose the information. It has nothing to do with wanting to target a programmer audience of readers. I don't want a lot of questions and attention from programmers right now, that will just slow me down with their confusion about what I am trying to design. I get plenty enough design input by reading voraciously what others have done and research papers.
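As a rough illustration of what mixin composition buys (a sketch in TypeScript's class-expression style, not Scala's or Copute's actual linearization algorithm), note how composition order determines which mixin's method overrides which — the ordering point the Odersky paper and the Copute page both address:

```typescript
// Minimal mixin sketch: each mixin wraps a base class, so composition order
// determines override order, loosely analogous to trait linearization.
type Ctor<T = {}> = new (...args: any[]) => T;

function Greeter<B extends Ctor>(Base: B) {
  return class extends Base {
    greet(): string { return "hello"; }
  };
}

function Shouter<B extends Ctor<{ greet(): string }>>(Base: B) {
  return class extends Base {
    greet(): string { return super.greet().toUpperCase(); } // overrides, delegates via super
  };
}

class Plain {}

// Applied inside-out: Shouter overrides Greeter.
const Composed = Shouter(Greeter(Plain));
const g = new Composed();
```

Swapping the application order changes which implementation wins, which is why an explanation of mixins must pin down the order of the inheritance list.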
thoughts about the Halting problem with respect to Copute
I think they will find flaws in my work, but hopefully only flaws that I know exist, because they are matters owned by God, e.g. the ability to predict the future. This is why the Halting Theorem says that it is "undecidable" whether a program will ever halt. It means we can't know the answer. Examples are very simple programs called cellular automata. They have very simple rules, maybe only 2 or 3, yet the only way to know what value they will create on the trillionth iteration is to run them a trillion times. We don't know if they will ever halt or settle into a repeating pattern. Some produce a repeated pattern for a long time, then start producing another pattern. They have patterns within patterns within patterns, which are revealed only as you run them longer.
We can prove that some programs halt, and that is because they are not Turing complete logic machines. They are limited in what they can express.
My job now is to give the language a way to express that the futures contracts do not exist in portions of the program. Once we isolate where the futures contracts are, we can isolate those portions that cannot be composed freely, will always have bugs, and will be "undecidable".
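The cellular-automaton point can be made concrete. This sketch runs an elementary cellular automaton under Wolfram's rule encoding (Rule 110 is known to be Turing-complete); in general, nothing short of actually iterating reveals the pattern at a given step:

```typescript
// Elementary cellular automaton: the next state of each cell depends only on
// itself and its two neighbors (wrapping at the edges), yet the long-run
// pattern is unpredictable in general -- you must actually iterate.
function step(cells: number[], rule: number): number[] {
  const n = cells.length;
  return cells.map((_, i) => {
    const left = cells[(i - 1 + n) % n];
    const right = cells[(i + 1) % n];
    const pattern = (left << 2) | (cells[i] << 1) | right; // neighborhood as 0..7
    return (rule >> pattern) & 1;                          // look up the rule bit
  });
}

// Rule 110, a single live cell, a few generations:
let row = [0, 0, 0, 0, 1, 0, 0, 0];
for (let gen = 0; gen < 3; gen++) row = step(row, 110);
```

Three trivial lines of update logic, yet predicting the row at generation one trillion without running one trillion steps is exactly the kind of question undecidability forbids in general.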
P.S. Remember my work where space-time is just a perception. The mimic octopus illustrates how we can be fooled by our space-time senses.
Last edited by Shelby on Sat Feb 19, 2011 12:36 pm; edited 1 time in total
Copute does not repeat the "Billion Dollar Mistake"
Copute has an orthogonal Maybe type, not automatically unboxed if not checked.
Other languages default types to nullable, which can cause an unchecked exception on every use of every type.
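The distinction can be sketched as follows (a hypothetical API of my own, not Copute's actual one): an orthogonal Maybe type makes the absent case a distinct variant the caller must check, whereas nullable-by-default lets the unchecked case slip through to a runtime exception.

```typescript
// Hypothetical Maybe sketch: the absent case is a distinct variant, so the
// compiler forces a check before the value can be used.
type Maybe<A> = { tag: "just"; value: A } | { tag: "nothing" };

const just = <A>(value: A): Maybe<A> => ({ tag: "just", value });
const nothing = <A>(): Maybe<A> => ({ tag: "nothing" });

function getOrElse<A>(m: Maybe<A>, fallback: A): A {
  return m.tag === "just" ? m.value : fallback; // no unchecked unboxing possible
}

// With nullable-by-default, the same lookup can throw at the use site:
function firstCharNullable(s: string | null): string {
  // `s.charAt(0)` unguarded is a compile error under strictNullChecks, but in
  // null-by-default languages it compiles and explodes at runtime.
  return s === null ? "" : s.charAt(0);
}
```

TypeScript's `strictNullChecks` mode, used here, is itself an example of retrofitting the non-nullable default onto a language that began with the "Billion Dollar Mistake".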
Was Knuth wrong about coroutines?
Tangentially, Donald Knuth is like an idol to many in computer science. He seems to be a very humble, likable, productive, and super intelligent person.
Regarding his statement about coroutines, "Subroutines are special cases of ... coroutines.". He also wrote the following:
Coroutines are analogous to subroutines, but they are symmetrical with respect to caller and callee: when coroutine A invokes coroutine B, the action of A is temporarily suspended and the action of B resumes where B had most recently left off.
Afaics, does he have it backwards, or is the statement arbitrary? Afaics, coroutines are special cases of subroutines.
Think about it: a subroutine which is referentially transparent will return the same value for the same inputs.
Thus all a coroutine is doing is restructuring the algorithm such that each "yield" is a return value from a subroutine. I had made this observation earlier:
http://code.google.com/p/copute/issues/detail?id=24
Let me give an example of coroutines:
- Code:
function TwoDimensionWalk( m, n )
{
for( i = 0; i <= m; ++i )
for( j = 0; j <= n; ++j )
{
// do any thing here
yield to Consume
}
}
function Consume()
{
// do any thing here
yield to TwoDimensionWalk
}
Here it is with subroutines:
- Code:
function TwoDimensionWalk( m, n )
{
for( i = 0; i <= m; ++i )
for( j = 0; j <= n; ++j )
{
NextJ()
Consume()
}
}
function NextJ()
{
// do any thing here
}
function Consume()
{
// do any thing here
}
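The first pseudocode above maps directly onto generators (a restricted, asymmetric form of coroutine available in JavaScript/TypeScript), which supports the point being argued: each yield point can be re-expressed as an ordinary subroutine call returning a value. A runnable sketch:

```typescript
// The producer yields each (i, j) pair; the consumer drives it with ordinary
// subroutine calls (the for..of loop calls next() under the hood), so each
// "yield" is just a return value handed back to the caller.
function* twoDimensionWalk(m: number, n: number): Generator<[number, number]> {
  for (let i = 0; i <= m; i++)
    for (let j = 0; j <= n; j++)
      yield [i, j];
}

function consumeAll(m: number, n: number): Array<[number, number]> {
  const out: Array<[number, number]> = [];
  for (const pair of twoDimensionWalk(m, n)) out.push(pair);
  return out;
}
```

Here `consumeAll(1, 1)` visits (0,0), (0,1), (1,0), (1,1) in order, with the consumer and producer alternating control exactly as in the coroutine pseudocode, yet the consumer side is plain subroutine calls.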
Last edited by Shelby on Sat Feb 19, 2011 12:33 pm; edited 1 time in total
Scala was 22,000 to 48,000 LOC to implement
http://lambda-the-ultimate.org/node/1233#comment-13870
A typical programmer will average around 30 delivered, production-ready LOC per day.
Considering that I might be above average, and considering I might work 50% longer per day, I am looking at on the order of 7 to 26 months to complete Copute's compiler.
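The arithmetic can be made explicit. The LOC range and the 30 LOC/day figure are from the post; the 2× "above average" multiplier below is my own assumption about what that phrase means, so the resulting range is a sketch, not the post's exact calculation:

```typescript
// Rough schedule estimate: LOC range divided by assumed daily throughput.
const locRange = [22_000, 48_000];  // Scala's implementation size, per the LtU comment
const baseLocPerDay = 30;           // "typical programmer"
const productivity = 2;             // ASSUMPTION: "above average" taken as 2x
const longerDay = 1.5;              // "work 50% longer per day"
const locPerDay = baseLocPerDay * productivity * longerDay; // 90 LOC/day
const workdaysPerMonth = 21;

const months = locRange.map(loc => loc / locPerDay / workdaysPerMonth);
// roughly 12 to 25 months under these assumptions
```

Under these assumptions the estimate lands at roughly 12 to 25 months, inside the 7-to-26-month range stated above; a more optimistic productivity multiplier would reach the lower bound.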
Tax strategy
http://esr.ibiblio.org/?p=2931#comment-296156
>I don’t think a “forgone license fee” counts as a loss under any accounting system.
Yeah, there’s no way the IRS would buy that. MS has been able to get massive write-offs by giving away copies of their products to schools and charities, but I don’t think NOK counts as a charity yet.
Lower middleman cost "App Store" is needed?
I think my business model for Copute might be needed by the market from the perspective of both the developers and the users (customers).
http://esr.ibiblio.org/?p=2931#comment-296323
Jacob Hallén Says:
February 12th, 2011 at 11:12 am
With Symbian the customer is owned by the operator. For an app to work (without tons of tedious warnings), it needs to have a certificate signed by the operator. In exceptional circumstances you can get your certificate signed by the phone manufacturer, but most of the time this doesn’t happen, because the phone manufacturer wants to stay buddies with the operator and get him to subsidise the manufacturer's phones.
As an app developer you are stuck in certification hell. It is tedious and damned expensive.
Enter the IPhone. The customer is owned by Apple. They will lease the customer to you on fairly decent terms. You no longer need to deal with hundreds of operators or manufacturers that see you as a threat to their business. This is a large part of the success of Apple in the market. It beats the pants off the Symbian model, because users can get the apps they want with any operator they care to use and the developers have one target to develop for. However, Apple charges a high price for the convenience.
Google sees this and is also worried about being kept out of the walled garden with their ads. They put Android on the market, making it free and making the apps self certified. This casts the users free from the control of both operators and handset manufacturers. It makes the developers happy, because they are no longer under the control of Apple.
http://esr.ibiblio.org/?p=2931#comment-296332
# Some Guy Says:
February 12th, 2011 at 12:27 pm
> Apple charges a high price for the convenience.
I wouldn’t call it a high price at all. 30% is quite typical for what we had to pay to distributors back in the days of software in boxes on retail shelves, and we don’t have to deal with the cost of implementing a payment system, etc.
http://esr.ibiblio.org/?p=2931#comment-296341
# Morgan Greywolf Says:
February 12th, 2011 at 2:01 pm
> I wouldn’t call it a high price at all. 30% is quite typical for what we had to pay to distributors back in the days of software in boxes on retail shelves, and we don’t have to deal with the cost of implementing a payment system, etc.
But we no longer live in a world of boxed software. These days most of us who do buy software do so online. That means the only middle men that are of any consequence are the credit card processors and the transaction clearinghouses. Why do we need Apple? Oh, I forgot, because they make us.
http://esr.ibiblio.org/?p=2931#comment-296345
# tmoney Says:
February 12th, 2011 at 4:05 pm
>Why do we need Apple? Oh, I forgot, because they make us.
And the examples of independent Android developers making their millions without using Android market place are…
http://esr.ibiblio.org/?p=2931#comment-296365
@Some Guy:
Certainly, there are examples of people getting rich off Apple’s app store, just like there are people getting rich from football, from singing, etc.
A lot of people make the assumption that paid apps are the way to go, because it’s easier to make good money with them than via advertising. That makes sense if the average developer can, in fact, make good money on the average app through Apple’s store.
I haven’t checked out this guy’s assumptions or math or numbers, but he goes into great detail, and concludes that the median paid app in Apple’s store earns $682 for its developer.
Wow! So that means a granular contribution monetization model might be a much more realistic way for programmers to be involved! Yeah!!!!
Last edited by Shelby on Sun Feb 13, 2011 6:50 am; edited 1 time in total
Decided to code the Copute compiler in Scala instead of HaXe
Scala is a superset of Copute's planned feature set, except for pure functions:
http://copute.com/dev/docs/Copute/ref/intro.html#Scala
This will drastically simplify the first iteration of the Copute compiler, because I can just do a literal AST translation of the grammar and output Scala code, with no post-parsing analysis needed. Meaning I don't have to do any semantic analysis, just a surjective mapping of the AST, sort of akin to a regex search and replace. This should mean I can have something working within about 3 months, assuming I don't take any time off.
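The "literal AST translation, akin to a regex search and replace" idea can be sketched abstractly. The toy node shapes below are my own invention for illustration, not Copute's actual grammar; the point is that every node maps directly to target syntax with no semantic analysis:

```typescript
// Toy source-to-source mapping: walk the AST and emit target syntax node by
// node -- no symbol tables, no type checking, just structural translation.
type Node =
  | { kind: "fun"; name: string; param: string; body: Node }
  | { kind: "call"; name: string; arg: Node }
  | { kind: "ident"; name: string };

function emitScalaLike(n: Node): string {
  switch (n.kind) {
    case "fun":
      return `def ${n.name}(${n.param}: Any) = ${emitScalaLike(n.body)}`;
    case "call":
      return `${n.name}(${emitScalaLike(n.arg)})`;
    case "ident":
      return n.name;
  }
}

const ast: Node = {
  kind: "fun", name: "f", param: "x",
  body: { kind: "call", name: "g", arg: { kind: "ident", name: "x" } },
};
```

This only works when the target language's feature set is a superset of the source's, which is exactly the stated reason for targeting Scala rather than HaXe.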
Whereas, the HaXe compiler can't do numerous things, so I would have to do considerable semantic analysis to output HaXe code:
http://copute.com/dev/docs/Copute/ref/intro.html#HaXe
Also if I write the initial compiler code in Scala, I will have all the planned features of Copute (except pure functions) at my disposal, and then translating that code back to Copute code later, will also be a simple regex search and replace. Coding in HaXe, although benefits from a more familiar C-like syntax, will be burdened by the critical features missing in HaXe.
Also HaXe has no debugger, and Scala has at least three integrated Java development environments with debuggers:
http://www.scala-lang.org/node/91#ide_plugins
Not having a debugger makes working in HaXe impractical.
Also if Copute outputs Scala code as one of its output targets, then it means Copute also has a debugger. Although it won't be debugging in native Copute, at least it will be in the surjective Scala code.
And Java is the main language of Google Android (and smartphones just passed PCs in global unit sales in Q4 2010!), and Scala runs on any platform that runs Java. So then Copute would too! Imagine Copute could become the most popular way to code Android applications.
So then what is the point of creating Copute, if Scala has everything except pure functions?
Several reasons:
- Copute has more optimized type erasure.
- Pure functions (from Haskell) are the critical feature needed for widescale contribution composability & reuse, as well as concurrency for more CPU cores coming.
- Most programmers cannot wrap their minds around Scala (and the better ones need months to grasp it). I was able to grasp Scala in a few days, but I am apparently far above average and have a lot of knowledge of Haskell, HaXe, etc. So Copute's syntax will be more familiar (C-like) and it will remove some of the unnecessary generality that makes Scala obtuse, yet retain all the necessary power of expression. So as one market (of several I foresee), I am viewing Copute as a faster way to write Scala code!
- Scala has the ability to throw and catch an exception. I am tentatively trying to eliminate even throwing an exception in Copute, except in a dynamic function.
- Dynamic coding is extremely popular and it will require massive boilerplate in Scala. Dynamic coding in Copute will be the same (thus concise) as in the most popular language in the world, JavaScript.
- Scala can mix implementation in an interface (called "trait" in Scala), which causes semantic errors. I have disallowed this in Copute's inheritance model.
- Scala allows structural subtyping, which I have disallowed in Copute, because it can cause semantic errors.
- Scala allows "override" runtime virtual inheritance, which can cause semantic errors.
- Scala allows abstract and higher-kinded types (as well as other things such as hiding closures in Unit type arguments, etc.), which make reading Scala code like trying to read a treasure map. Copute code is more transparent and straightforward, without losing necessary capabilities.
- The intent is for Copute to target more languages, e.g. JavaScript, PHP, Python, etc.. Copute will be able to do this more easily because it doesn't have those unnecessary, academic super generality types that Scala has. Scala appears to be tied to the Java (and maybe Microsoft .Net) platforms for the time being.
- Copute has an EBNF defined LL(3) context-free grammar (CFG) already completed, so easier for others to write Copute tools. I am not sure if Scala has a BNF CFG specification (I need to research this).
- K.I.S.S.
- If Copute's compiler code will be simpler than Scala's (which it should be, since less unnecessary generality), then I should be able to innovate faster than Scala with more contributors, as they will be able to more readily understand the code.
- there are probably more reasons....
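On the pure-functions bullet above: the reason purity buys parallelism is that calls which share no mutable state can be reordered, or run concurrently, without changing the result. A minimal sketch of that property (order-independence stands in here for actual threading):

```typescript
// A pure function's calls are independent, so mapping it over inputs can be
// evaluated in any order -- or in parallel -- with the same result.
const square = (x: number): number => x * x;   // pure: no shared state touched

const input = [1, 2, 3, 4];
const sequential = input.map(square);
const reversedOrder = [...input].reverse().map(square).reverse();
// Both evaluation orders give the same answer: purity makes order irrelevant.

// Contrast: an impure function whose result depends on call order.
let total = 0;
const impure = (x: number): number => { total += x; return total; };
// Mapping `impure` over the inputs in a different order gives different results,
// which is why impure code cannot be safely parallelized automatically.
```

This is the property a compiler can exploit for the coming many-core CPUs: if purity is checked by the type system, the scheduler never needs to prove non-interference at runtime.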
Last edited by Shelby on Sat Aug 06, 2011 1:17 pm; edited 4 times in total
Type erasure versus reified generics
Here follows the Wikipedia comparison for Java's type erasure versus Microsoft C#'s reification, which I have expanded to include Scala's improvements over Java, and Copute's proposed method to obtain full reification via various planned output targets. The rows from the Wikipedia table which seemed unimportant appear as empty rows below.
Btw, I find this summary of differences between Java and Scala, very useful.
So far, it seems to me that type erasure is more efficient than reification, because it does not require unnecessary multiple runtime versions of generic code, and that the problems with type erasure arise because the JVM (Java Virtual Machine) has poorly understood "use-site" variance declaration (versus Copute's and Scala's "definition-site" variance declaration), and the JVM erases the order and number of type parameters and the bounds of the type parameters, so there is much unnecessary casting (i.e. reflection) and other problems.
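Type erasure is easy to observe in any language that compiles generics away. TypeScript, like the JVM, erases type parameters entirely, so runtime checks on them are impossible — which is exactly why the casts discussed below must be injected and trusted. A sketch:

```typescript
// After erasure there is one runtime representation per generic definition:
// a Box<string> and a Box<number> are indistinguishable at runtime.
class Box<T> {
  constructor(public value: T) {}
}

const a = new Box<string>("hi");
const b = new Box<number>(7);

// instanceof can only see the erased class, never the type parameter:
const sameErasedClass = a instanceof Box && b instanceof Box; // both true
// There is no runtime test like `a instanceof Box<string>` -- that information
// was erased at compile time, so any downcast of `value` back to its precise
// type must be injected by the compiler (and trusted, not verified).
```

A reified runtime such as .NET would instead synthesize a distinct `Box<string>` and `Box<int>` class, making the `is`/`as` checks possible at the cost of multiple runtime realizations.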
Java | Microsoft C# | Scala | Copute |
Type checks and downcasts are (implicitly) injected into client code (the code referencing the generics). Compared to non-generic code with manual casts, these casts will be the same[11]. But compared to compile-time verified code which would not need runtime casts and checks, these operations represent a performance overhead. Note such casts are not injected when the type parameter has a bound (the type that is erased to) which is the expected type. | C#/.NET generics guarantee type-safety, are verified at compile time, and no extra checks/casts are necessary at runtime. Hence, generic code will run faster than non-generic code (and type-erased code), which requires casts when handling non-generic or type-erased objects. | Scala implicitly injects these casts whether it runs on a virtual machine that is generics-ignorant (i.e. requires type erasure), e.g. the Java Virtual Machine (JVM), or on a virtual machine which has reification, e.g. Microsoft .NET. | Copute is a chameleon and these casts will only be implicitly injected when type erasure is required by the output target. And note that dynamically typed language output targets such as JavaScript and Python do not need casts (although they suffer speed because they do a hashtable lookup for each object member access). |
Cannot use primitive types as type parameters; instead the developer must use the wrapper type corresponding to the primitive type. This incurs extra performance overhead by requiring boxing and unboxing conversions, as well as memory and garbage collection pressure, because the wrappers will be heap allocated as opposed to stack allocated. | Primitive and value types are allowed as type parameters in generic realizations. At runtime code will be synthesized and compiled for each unique combination of type parameters upon first use. Generics which are realized with a primitive/value type do not require boxing/unboxing conversions. | Even though all primitive types are classes in Scala, these are implemented as primitive types where possible in the JVM["Step 8"]. Thus, due to type erasure Scala suffers the same boxing performance penalty as Java when a primitive type is used as a type parameter. However, there exist libraries and experimental compiler features to help with optimization, but they introduce boilerplate tsuris. | Copute is a chameleon and will achieve at least the performance of its output target. Additionally, Copute may apply implicit optimizations (that require no boilerplate) to improve performance on output targets that otherwise employ boxing. |
Generic exceptions are not allowed[12] and a type parameter cannot be used in a catch clause[13]. | Can both define generic exceptions and use those in catch clauses. | ? | Copute is discouraging, or hopefully eliminating, user code exceptions. Exceptions may be generated implicitly by primitive types, e.g. integer overflow. |
Static members are shared across all generic realizations[14] (during type erasure all realizations are folded into a single class). Note this is only a problem for mutable static members, e.g. those not final or a method. Note that it is not allowed to make a static member which is not a function (nor method) and has the type of, or that is parametrized by, a class type parameter, because these would present a type mismatch between two generic realizations. | Static members are separate for each generic realization. A generic realization is a unique class. | Scala's singleton object is shared across all type parameter realizations of the companion class, and it can not be type parametrized, but each method of a singleton object can be orthogonally parametrized (see next row of this table), which is essentially what is needed. Thus this is only a problem for mutable members of object, e.g. those that are var (not val or def), because they can not be type parametrized. | Although it is difficult to think of a case where there is benefit when static members are separate for each type parameter realization (why is a list of string differentiated from a list of integer for a static?), Copute could achieve this on output targets which use type erasure by creating a unique singleton object for each realization, but only necessary for those that have mutable static members. It is not yet decided if there are use cases that justify this, since afaics it is only necessary for a static member which is not a function (nor method) and has the type of, or that is parametrized by, a type parameter, because these would otherwise present a type mismatch between two generic realizations, but mutable statics are bad design. |
Type parameters cannot be used in declarations of static fields/methods or in definitions of static inner classes. | No restrictions on use of type parameters. | Each function (or method) in the singleton object can declare type parameters which are orthogonal to, or duplicates of, any type parameters of the companion class. Any companion class type parameters can be accessed, by inputting an instance of the companion class to the static function (or method), then calling a method in the companion class-- because there is no use for the concrete companion class type parameters without an instance of the companion class. As stated in prior row of this table, mutable static non-functions (a/k/a 'fields') can not be type parametrized, and they can't be allowed because they cause type mismatch between generic realizations (and are undesirable bad design any way). Inner classes are correctly orthogonally parametrized and the type parameters can be specified on each generic realization. | Ditto as per Scala, except this is done with 'static' keyword instead of singleton 'object' declaration. It would also be possible to simulate type parametrized mutable static non-functions (a/k/a 'fields'), per the prior row of this table, but probably undesirable. |
Cannot create an array where the component type is a generic realization (concrete parameterized type). | A generic realization is a 1st class citizen and can be used as any other class; also an array component. | "Scala does not know such a restriction, although it uses erasure." | Copute does not have such restriction. |
Cannot create an array where the component type is a type parameter. | Type parameters represent actual, discrete classes and can be used like any other type within the generic definition. | "Scala does not know such a restriction, although it uses erasure." | Copute does not have such restriction. |
There is no class literal for a concrete realization of a generic type. Thus instanceof and cast on a parametrized type are impossible. | A generic realization is an actual class. | There is no class literal for a concrete realization of a type parameter. Thus isInstanceOf, asInstanceOf, and pattern matching a parametrized type are impossible. | It is difficult to think of a case where there is a benefit to isInstanceOf, asInstanceOf, or pattern matching on a parametrized type, because it would be impossible to know a priori all the possible class literals to expect and this would not be checked at compile-time. An equivalent functionality can be checked at compile-time by employing function overloading, which is compatible with output targets that do type erasure. Although isInstanceOf and pattern matching could be provided by storing reflection data in each instance, or otherwise providing reflection, it is the implementation of asInstanceOf which would be more complex, as it would require casts on every use of the erased type parameter, e.g. T as String erased to Object. |
instanceof is not allowed with type parameters or concrete generic realizations. | The is and as operators work the same for type parameters as for any other type. | See prior row of this table. | See prior row of this table. |
Cannot create new instances using a type parameter as the type. | With a constructor constraint, generic methods or methods of generic classes can create instances of classes which have default constructors. | Cannot create new instances using a type parameter as the type. | It is poor design to instantiate explicit class literals (i.e. use new), so proper design will have all types implement a static type parametrized factory interface; then any type parameter which is upper-bounded by that interface can be instantiated by calling the interface's factory method. This boilerplate could be automated in Copute for every concrete class that has an implicit (i.e. no explicit) constructor, or a constructor without formal parameters. The concrete class could override the factory method to provide custom behavior. Such a solution works with output targets that do type erasure. |
The unbounded type(s) of the parametrized type are erased during compilation. Special extensions to reflection must be used to discover the original type. | Type information about C# generic types is fully preserved at runtime, and allows complete reflection support as well as instantiation of generic types. | The unbounded type(s) of the parametrized type are erased during compilation. | It is difficult to think of a case of benefit from reflection for reading the unbounded type(s) of a parametrized type. It could be done if we can think of a justified use case. |
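To make the erasure rows above concrete, here is a minimal runnable Java sketch (the class and variable names are mine): two generic realizations share a single runtime class, so there is no class literal to test against, and an unchecked cast between realizations only fails later, at element access.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();

        // Both realizations erase to the same runtime class, so there is no
        // class literal for List<String>, and "x instanceof List<String>"
        // does not even compile.
        if (strings.getClass() != ints.getClass()) throw new AssertionError();

        // An unchecked cast between realizations is not checked at runtime;
        // the failure is deferred to element access, where the compiler
        // inserted an implicit cast to Integer.
        @SuppressWarnings("unchecked")
        List<Integer> alias = (List<Integer>) (List<?>) strings;
        strings.add("oops");
        boolean deferredFailure = false;
        try {
            int n = alias.get(0) + 1; // ClassCastException is thrown here
        } catch (ClassCastException e) {
            deferredFailure = true;
        }
        if (!deferredFailure) throw new AssertionError();
    }
}
```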
Last edited by Shelby on Wed Feb 16, 2011 11:23 am; edited 28 times in total
Proving the Halting Problem, by Dr. Suess
The following is relevant to Copute, but it is also relevant to everything, including all science, religion, philosophy, and any search for truth and knowledge.
Recursion is a key feature of a Turing complete machine (i.e. a computer).
Russell's Paradox: there is no rule for a set that does not cause it to
contain itself, thus all sets are infinitely recursive.
This is also a feature of Russell's Paradox: nothing can be non-recursive, and any such false barrier will be brittle and fail, due to Coase's Theorem.
And it follows all the other theorems, which all derive from and relate to the 2nd law of thermodynamics, which says the universe is always trending to maximum disorder (i.e. maximum possibilities):
* Liskov Substitution Principle: it is an undecidable problem whether subsets
inherit.
* Linsky Referencing: it is undecidable what something is when it is
described or perceived.
* Coase Theorem: there is no external reference point, any such barrier
will fail.
* Gödel's Theorem: any formal theory in which all arithmetic truths can
be proved is inconsistent.
* 1856 Thermo Law: entire universe (a closed system, i.e. everything)
trends to maximum disorder.
http://ebiquity.umbc.edu/blogger/2008/01/19/how-dr-suess-would-prove-the-halting-problem-undecidable/
Scooping the Loop Snooper
an elementary proof of the undecidability of the halting problem
Geoffrey K. Pullum, University of Edinburgh
No program can say what another will do.
Now, I won’t just assert that, I’ll prove it to you:
I will prove that although you might work til you drop,
you can’t predict whether a program will stop.
Imagine we have a procedure called P
that will snoop in the source code of programs to see
there aren’t infinite loops that go round and around;
and P prints the word “Fine!” if no looping is found.
You feed in your code, and the input it needs,
and then P takes them both and it studies and reads
and computes whether things will all end as they should
(as opposed to going loopy the way that they could).
Well, the truth is that P cannot possibly be,
because if you wrote it and gave it to me,
I could use it to set up a logical bind
that would shatter your reason and scramble your mind.
Here’s the trick I would use – and it’s simple to do.
I’d define a procedure – we’ll name the thing Q -
that would take any program and call P (of course!)
to tell if it looped, by reading the source;
And if so, Q would simply print “Loop!” and then stop;
but if no, Q would go right back to the top,
and start off again, looping endlessly back,
til the universe dies and is frozen and black.
And this program called Q wouldn’t stay on the shelf;
I would run it, and (fiendishly) feed it itself.
What behaviour results when I do this with Q?
When it reads its own source, just what will it do?
If P warns of loops, Q will print “Loop!” and quit;
yet P is supposed to speak truly of it.
So if Q’s going to quit, then P should say, “Fine!” -
which will make Q go back to its very first line!
No matter what P would have done, Q will scoop it:
Q uses P’s output to make P look stupid.
If P gets things right then it lies in its tooth;
and if it speaks falsely, it’s telling the truth!
I’ve created a paradox, neat as can be -
and simply by using your putative P.
When you assumed P you stepped into a snare;
Your assumptions have led you right into my lair.
So, how to escape from this logical mess?
I don’t have to tell you; I’m sure you can guess.
By reductio, there cannot possibly be
a procedure that acts like the mythical P.
You can never discover mechanical means
for predicting the acts of computing machines.
It’s something that cannot be done. So we users
must find our own bugs; our computers are losers!
================
Let me make the proof simpler for you. Suppose I write a function that returns either "Stops" or "Does not stop" when it analyzes any other function. Now I have my function call that function, so that function analyzes my function and returns "Does not stop", because there is an infinite loop.
The point is that any function can be called. The set of functions always contains itself.
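The diagonalization above can be sketched in Java. The decider P, the program Q, and their names are all hypothetical scaffolding; to keep the sketch runnable, Q returns the sentinel string "LOOPS" instead of actually looping forever. Only the two constant deciders are exercised here, but the same inversion defeats any P, since Q always does the opposite of whatever P predicts about Q.

```java
import java.util.List;
import java.util.function.Predicate;

public class LoopSnooper {
    interface Program { String run(); }

    // Q built from a putative halting decider P: Q asks P about itself and
    // then does the opposite. "LOOPS" is a sentinel standing in for the real
    // infinite loop, so the demo stays runnable.
    static Program makeQ(Predicate<Program> p) {
        return new Program() {
            public String run() { return p.test(this) ? "LOOPS" : "HALTS"; }
        };
    }

    public static void main(String[] args) {
        Predicate<Program> saysHalts = q -> true;  // P: "everything halts"
        Predicate<Program> saysLoops = q -> false; // P: "nothing halts"
        for (Predicate<Program> p : List.of(saysHalts, saysLoops)) {
            Program q = makeQ(p);
            String claim = p.test(q) ? "HALTS" : "LOOPS"; // what P predicts
            String actual = q.run();                      // what Q does
            if (actual.equals(claim)) throw new AssertionError("P was right about Q?!");
        }
    }
}
```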
Mixins are just single-inheritance folding
I wrote some public comments today
http://www.codecommit.com/blog/scala/scala-for-java-refugees-part-5/comment-page-1#comment-5284
@Cedric
You missed the point of mixins. They completely solve the diamond problem because only duplicate methods which are from the EXACT SAME trait are resolved automatically. Any duplicates from different traits have to be resolved by the superclass in the single-inheritance chain (i.e. analogous to “Class::” in C++).
You can’t do a “Class::” specific method call from a trait (you must use the less specific “super.” instead), because mixin order can insert a mixin in the middle of the inheritance hierarchy of another mixin. For example, given mixin A (B) and mixin C (B), then mixin D (C,A) would effectively cause mixin C (A,B), because the B in mixin C (B) is already in the inheritance tree of mixin A (B), and thus not repeated. A more detailed explanation can be found at the top left of page 7, in the Super subsection of the 2.2 Modular Mixin Composition section of Scalable Component Abstractions, Odersky & Zenger, Proceedings of OOPSLA 2005, San Diego, October 2005.
http://www.scala-lang.org/sites/default/files/odersky/ScalableComponent.pdf
So I want you to realize that “Class::” specific method invocation can never be as flexible as mixins.
I studied this deeply for the design of my Copute language.
Shelby Moore III Sunday, February 13, 2011 at 2:12 am
typo: I meant “…have to resolved by the subclass…”
Shelby Moore III Sunday, February 13, 2011 at 2:14 am
@Cedric
Don’t conflate some programmer’s insufficient separation-of-concerns in the design of a trait with the Diamond Problem. The Diamond Problem is a specific problem of how to resolve multiple inheritance name spaces (scope).
Also please realize that Scala is a single-inheritance language, with the appearance of multiple inheritance of traits. In reality, traits get folded into that single-inheritance chain in a specific order and are NOT multiply inherited.
Shelby Moore III Sunday, February 13, 2011 at 2:24 am
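Java has no mixins, but its interface default methods run into the same diamond, and its resolution rule matches the one described above: duplicates coming from different interfaces must be resolved explicitly by the inheriting class. A minimal sketch (the names A, C, and D are mine, echoing the mixin example):

```java
public class DiamondDemo {
    interface A { default String name() { return "A"; } }
    interface C { default String name() { return "C"; } }

    // Java rejects "class D implements A, C" outright unless D resolves the
    // duplicate name() itself: the analogue of the subclass resolving
    // duplicates that come from different traits. A.super.name() is the
    // closest Java gets to a specific "Class::" call, and it is only legal
    // on a direct superinterface.
    static class D implements A, C {
        public String name() { return A.super.name() + C.super.name(); }
    }

    public static void main(String[] args) {
        if (!new D().name().equals("AC")) throw new AssertionError();
    }
}
```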
Constructors are considered harmful
http://gbracha.blogspot.com/2007/06/constructors-considered-harmful.html
Shelby added a comment:
I have refined the solution since I wrote the above. Above I was proposing that static factories in an interface could have their return type parametrized to the implemented subtype class. That does not solve the problem. Static factories in an interface are necessary for other SPOT (single-point-of-truth) and boilerplate elimination reasons, e.g. see my implementation of 'wrap' (a/k/a 'unit') in an IMonad (IApplicative), and notice how much more elegant it is than the equivalent Scalaz. Note SPOT is also a critical requirement for maximizing modularity, i.e. ease of composition and reuse.
Rather, to accomplish abstraction of constructors, we need to nudge programmers to input factory functions, so that any code can be abstracted over another subtype of an 'interface' (i.e. instead of calling a 'class' constructor directly, input a factory function which returns the result of calling a constructor; thus the caller can change the subtype being constructed). So the important point is that we want to force programmers to create an 'interface'(s) for all their 'class' methods, which is accomplished by not allowing a method implementation (i.e. 'class' nor 'mixin') to be referenced anywhere except in the 'inherits' declaration and constructor call. This means the type of an instance reference can not contain an identifier which is a 'class' nor 'mixin', thus forcing the type of all instance references to contain identifiers which are an 'interface', i.e. instance references reveal the abstract interface, but do not indicate the implementation.
So Copute will have a crucial difference from Scala and the other contenders (e.g. Ceylon), in that 'interface' and 'mixin' will be separated (not conflated in a 'trait'), and only 'interface' can appear in the type of instance references. Note that in Copute (just like for a 'trait' in Scala) 'mixin' and 'interface' may not have a constructor. Scala's linearised form of multiple inheritance is retained.
Note this unique feature of Copute (along with the elimination of virtual methods, a/k/a 'override') is also necessary to enforce the Liskov Substitution Principle (which relates to the concepts of covariance and contravariance):
(note some of the following specification is out-of-date with current specification in grammar and in my various notes around)
http://copute.com/dev/docs/Copute/ref/class.html#Virtual_Method
http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
http://copute.com/dev/docs/Copute/ref/class.html#Static_Duck_Typing
http://copute.com/dev/docs/Copute/ref/function.html#Overloading
Shelby added a comment:
Gilad is making the correct point that in best design practice composable modules (i.e. APIs) should expose abstract interfaces but not their concrete classes. Thus new becomes impossible, because there is no concrete class to instantiate. Apparently Gosling realized this.
Static factories in abstract interfaces can accomplish this with type parametrization. Gilad Bracha wrote: “The standard recommended solution is to use a static factory [...] You can’t abstract over static methods”
Static methods can be abstracted (over inheritance) with type parametrization, e.g.
- Code:
interface Factory<+Subclass extends Factory<Subclass>>
{
newInstance() : Subclass
}
where the + declares that Factory can be referenced (assigned) covariantly due to Liskov Substitution Principle. The + is unnecessary in the above example, but exists in the syntax generally to perform checking against LSP.
Thus any API can abstract over any publicly exposed Factory<T> by associating them privately with instances of Factory<InheritsFromT>. In other words, our interface could have been as follows, where the factory creates a new instance of itself, which can be any subtype because inherited return types may be covariant.
- Code:
interface Factory
{
newInstance() : Factory
}
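As a rough illustration, the two factory interfaces above can be approximated in plain Java (Widget and build are hypothetical names of mine; Java lacks declaration-site '+' variance, so the F-bounded form carries the subtype as a type parameter):

```java
public class FactoryDemo {
    // Java rendition of the F-bounded factory interface from the text.
    interface Factory<T extends Factory<T>> { T newInstance(); }

    // A hypothetical concrete class; abstracted callers never name it.
    static class Widget implements Factory<Widget> {
        final int generation;
        Widget() { this(0); }
        Widget(int generation) { this.generation = generation; }
        public Widget newInstance() { return new Widget(generation + 1); }
    }

    // Code abstracted over construction: no 'new', no concrete class name.
    static <T extends Factory<T>> T build(Factory<T> factory) {
        return factory.newInstance();
    }

    public static void main(String[] args) {
        Widget w = build(new Widget());
        if (w.generation != 1) throw new AssertionError();
    }
}
```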
On the topic of concrete implementation inheritance and constructors, the Scala-like mixins can not handle this, because external parameters for constructors are not allowed in a trait implementation. Thus each mixin is detached from the external inheritance order. I have taken this a step further in my design for Copute, because I force the separation of purely abstract interface and purely concrete mixin implementation, thus every mixin has to inherit from an abstract interface and can only be referenced as a type via that interface, i.e. concrete mixins are not types in any scope other than the mixin declaration.
Here are some more relevant thoughts from Gilad Bracha:
http://gbracha.blogspot.com/2008/02/cutting-out-static.html?showComment=1221878100000#c3542743084882598768
A singleton is simply a unique object. In most languages, you can use the static state associated with a class to ensure it only has one instance, and make singletons that way. But this only works because the class itself is a singleton, and the system takes care of that for you by having a global namespace.
In Newspeak, there is no global namespace. If you need singletons in your application, they are simply instance variables of your module. When you load your application, you hook up your modules and make a single copy of each.
If, on the other hand, you need a service that's accessible to an open-ended set of users, it has to be available at some public place - this could be a URL on the internet (the real global state) or a platform wide registry. In other words, it's part of the outside world's state.
Such world state may be injected into your application when it starts up (but only in as much as the platform trusts you to access it).
Not sure if this helps. The habit of static state is pervasive in computing and it's hard for people to get rid of it - but we will.
Note, Gilad Bracha helped write the Java specification.
See also this:
http://en.wikipedia.org/wiki/Hollywood_principle
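A minimal Java sketch of Bracha's point (Clock and AppModule are hypothetical names): the module's 'singleton' becomes an instance field injected at startup, rather than static state reachable from anywhere.

```java
public class NoStaticsDemo {
    // A hypothetical Clock service standing in for "world state".
    interface Clock { long now(); }

    // The module's "singleton" is just a private field, injected at startup,
    // instead of a static reachable from anywhere.
    static class AppModule {
        private final Clock clock;
        AppModule(Clock clock) { this.clock = clock; }
        long timestamp() { return clock.now(); }
    }

    public static void main(String[] args) {
        // Whoever wires the application decides which single Clock exists;
        // a test can inject a fixed one.
        AppModule app = new AppModule(() -> 42L);
        if (app.timestamp() != 42L) throw new AssertionError();
    }
}
```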
Last edited by Shelby on Tue Jun 21, 2011 8:21 am; edited 9 times in total
re: The Smartphone Wars: Nokia shareholders revolt!
http://esr.ibiblio.org/?p=2961&cpage=1#comment-296658
Shelby wrote (as "Imbecile" but not the same "Imbecile" who posted other comments):
Nokia must focus on its inherent strength, which is as a refinement innovator, not a paradigm innovator a la Steve Jobs. Microsoft will slow them down, because Microsoft is strong in neither, and Windows Phone is too far behind in a race of exponential rate of innovation. Due to the exponential function, it is too late for anyone to go back and start a new smartphone OS from scratch, or even take time to complete an unfinished one, unless it will offer massive compelling advantages, which is probably unrealistic. The realistic forward innovations are precisely in Nokia's area of strength. By the end of 2011, the smartphone marketshare of Nokia will have eroded to the teens, and Android's will be triple that.
Nokia should innovate on Android so it can ship a #1 selling smartphone in 2011, and incrementally differentiate itself from the herd. It is potentially possible to co-opt Google with strategic innovations that diverge from the herd's common base. The Android platform is inherently fractured, as this is the desirable nature of open source. The opportunity is wide open for Nokia to provide an unfractured Android platform. Popular innovations will eventually make their way back into the common base, but always on a lag; look to Apple as a model of profitability as first-innovator.
There is no credible AppStore or iTunes on Android on the horizon. The opportunities to take the best of Android, win the race to market, and innovate are wide open. Do not fight against the exponential function. Embrace the strengths of open source, and your own strengths with respect to it; this advice applies to everyone. Dinosaurs stand in the way of open source. Re-inventing Android as MeeGo at this stage is an enormous waste of capital, and the free market does not reward those who do not focus capital on their relative strengths. MeeGo is yet another coffin in the European culture cemetery of "politics 90% of the time, to get 10% production". The institutional investors are correct that the "American" (libertarian) culture of "Just Do It" wins, but the Elop and Microsoft selections are not even shadows of that.
Last edited by Shelby on Sat Mar 05, 2011 6:05 pm; edited 1 time in total
Why I should work on Copute, even if I never earn a penny
Net worth is overrated.
Accepting what is.
Note this was recorded without any forethought, just stream of thought while I was deep in programming just a few moments ago...
http://coolpage.com/accepting.mp3
The recording is my biblical insight into contentment.
I knew I was destined to be poor, even since I learned to love beans & rice. Wealth is not an indicator of success. It is better to have tried and failed, than to have wasted a life on "the highest ROI" as measured in gold & silver or any other metric of money.
Tension between CR and CTR for advertising!
Aha!
I remember from my high ($5000+ per month) PPC advertising spending back in 2000-2005:
http://www.ppcsummit.com/newsletter/pay-per-click/ad-copy-isnt-just-text/
While it is never a good idea to optimize ad text exclusively to CTR, if you can maintain or improve your conversion rate (CR) while also increasing CTR, you need to do so.
The problem is you can't always get them both to optimize together.
This is why paid advertising is not always the optimal model for maximizing knowledge and prosperity. The CR (conversion ratio of visitors to sales) is what matters most for maximizing knowledge and prosperity. The CTR (the click-through ratio of clicks to views of ads) is what Google needs to maximize its revenue.
I have realized that the way to maximize CR is if users could compete to suggest sites they like in the context of other sites, with small writeups. Sort of like blogging on another webpage, e.g. I could blog on Hommel's site or cnn.com, etc. The visitor would decide if they want to view these suggestions. They would be ranked by CTR, but realize that the CR would then always be maximized, because visitors wouldn't cost anything (no Google and source-site charges) and would be optimized according to where the CTR is most effective, with potentially a different ad copy for maximizing CTR for each possible site where the "ad" could be viewed, custom made by the users.
This would totally change the web, because ad sponsored sites would wither away. Knowledge sites and needed products would prosper more, and more efficiently with less waste and middle men.
Yeah I think this would be great. It would drastically shrink Google's future.
I am coming after those sites which are wasting their asset, with technology paradigm shifts that put the power in the hands of the readers to vote on what is most relevant.
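With hypothetical numbers (mine, for illustration only), the tension is easy to see: an ad can win on CTR, which is what the ad network monetizes, while losing on CR, which is what the merchant needs; sales per view is the product of the two.

```java
public class AdMetrics {
    // CTR = clicks per ad view; CR = sales per visitor (i.e. per click).
    static double ctr(int clicks, int views)  { return (double) clicks / views; }
    static double cr(int sales, int visitors) { return (double) sales / visitors; }

    public static void main(String[] args) {
        // Hypothetical numbers: ad A baits clicks, ad B converts visitors.
        double ctrA = ctr(50, 1000), crA = cr(1, 50); // 5% CTR, 2% CR
        double ctrB = ctr(20, 1000), crB = cr(2, 20); // 2% CTR, 10% CR

        if (!(ctrA > ctrB)) throw new AssertionError(); // the ad network prefers A
        if (!(crB > crA)) throw new AssertionError();   // the merchant prefers B
        // Sales per view = CTR * CR: B actually sells twice as much per view.
        if (!(ctrB * crB > ctrA * crA)) throw new AssertionError();
    }
}
```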
Category Theory is critical to understanding functional programming deeply
The best explanation I have found, which is comprehensible to someone (like me) without a master's degree in category theory, is "Comprehending Monads" by Wadler.
You can Google for it, there is a PDF online.
I am on page 8, and the first 7 pages were very well written and I was able to digest them in about 1 hour, and I can say so far I understand the first 7 pages very well and deeply/thoroughly (I think).
If you want to compare with a more abstract mathematical tutorial, here is a concise one:
http://www.patryshev.com/monad/m-c.html
Or overview:
http://homepages.inf.ed.ac.uk/jcheney/presentations/ct4d1.pdf
http://www.algorithm.com.au/downloads/talks/monads-are-not-scary/monads-are-not-scary-chak.pdf
Btw, Philip Wadler has been, for the past 2-3 decades, one of the most important researchers in the field of computer science:
http://homepages.inf.ed.ac.uk/wadler/vita.pdf
http://en.wikipedia.org/wiki/Philip_Wadler
Last edited by Shelby on Mon Feb 21, 2011 5:07 pm; edited 3 times in total
Being Popular (computer language)
http://www.paulgraham.com/popular.html
Of course, hackers have to know about a language before they can use it. How are they to hear? From other hackers. But there has to be some initial group of hackers using the language for others even to hear about it. I wonder how large this group has to be; how many users make a critical mass? Off the top of my head, I'd say twenty. If a language had twenty separate users, meaning twenty users who decided on their own to use it, I'd consider it to be real.
Getting there can't be easy. I would not be surprised if it is harder to get from zero to twenty than from twenty to a thousand. The best way to get those initial twenty users is probably to use a trojan horse: to give people an application they want, which happens to be written in the new language.
Scala has critical defects; Copute will output to Scala w/o those defects
Copute will initially output to Scala; this is the fastest way to get a debugger/IDE for free, and the mapping from Copute to Scala is very straightforward, so time-to-market should be on the order of 3 months. HaXe has faded away as a potential target, both for lack of an IDE and also for missing critical features such as type parameter co-/contra-variance.
Scala (or maybe C#/.Net except Microsoft is dying) is currently the best hope for next mainstream OO+FP language.
Well I have finally gotten to the point where I think I can enumerate the critical things Copute will be able to do that Scala apparently can not.
And apparently these affect the very ability to be abstract (i.e. reusable and composable), which is Scala's main and mnemonic claim of superiority ("Scala is scalable").
http://copute.com/dev/docs/Copute/ref/intro.html#Scala
P.S. If you read the Copute docs previously, note that numerous egregious errors have since been corrected. Also the quality of the docs has been significantly improved (though it still needs more improvement).
Android is the killer app?
http://esr.ibiblio.org/?p=2975&cpage=3#comment-297417
Shelby (as Imbecile but not the same "Imbecile" who posted other comments) wrote:
http://esr.ibiblio.org/?p=2975&cpage=3#comment-297465
Shelby (as Imbecile but not the same "Imbecile" who posted other comments) wrote:
Excuse me if I'm not omniscient about such matters, but is it the case that *nix conquered the server in 2008 on deadline, and Android is the killer client app? The OS became an app in a new abstraction named "cloud computing" and the network became the OS?
No. *nix conquered the server long, long ago.
Granted *nix server had majority market share long ago.
ESR cited an 83/23 split (i.e. 78%/22%) for new workloads circa 2007, so perhaps the conquering was 90/10% (roughly Pareto squared) complete by 2008, thus meeting ESR's deadline.
Is Android the killer app because it paradigm-shifted open source hackers to optimize for hardware without a keyboard-- flatmapping the world closer to "the programmers are the users and vice versa"? Open source for the masses. On deadline for the roughly 10 year cycle for the imminent arrival of a new programming language for these masses:
1975 he started using “structured programming” techniques in assembly language
1983 a new era dawned for him as he started doing some C programming
1994 he started doing object-oriented programming in C++
2000 he made the switch to Java
Can Java be the language on Android, invalidating the 10 year cycle deadline? Will the next language be the virtual machine with a smorgasbord of grammars?
Tying this to the OODA and martial arts discussion, note that solving a problem by a "mapping to a new context" or "passing through an envelope" abstraction is a monad model, hence the mention of flatmap. Could the next programming language achieve monads-for-dummies?
Last edited by Shelby on Sat Mar 05, 2011 6:06 pm; edited 1 time in total
Scala's standard library may have fundamental semantic errors?
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-1#comment-5289
Shelby wrote:
Shelby wrote:
Perhaps I am missing something, but off the top of my head, I am thinking the following is semantically incorrect, because it makes all None equivalent. (Also, doesn't this require Nothing to be a subtype of every possible A, so that bind can always be called with the same function whether it is Some or a None?)
- Code:
case object None extends Option[Nothing] {
  def bind[B](f: Nothing => Option[B]) = None
}
Is that the way it is implemented in the Scala standard library? It seems to me that None should be parametrized too, so that a None for one type (e.g. String) isn't equal to a None for another type that is not covariant with it (e.g. Int).
- Code:
case class None[+A]() extends Option[A] {
  def bind[B](f: A => Option[B]) = None[B]()
}
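For reference, the Scala standard library does define None in essentially the first form: a single case object extending Option[Nothing]. The sharing of one None across all element types is deliberate: Option is covariant and Nothing is a subtype of every type, so Option[Nothing] is a subtype of every Option[A]. A minimal sketch of why this typechecks (hypothetical names MyOption/MySome/MyNone to avoid clashing with the standard library; bind here plays the role of the library's flatMap):

```scala
// Hypothetical stand-ins for the standard library's Option/Some/None,
// which are defined in essentially this shape.
sealed trait MyOption[+A] {
  def bind[B](f: A => MyOption[B]): MyOption[B]
}
case class MySome[+A](value: A) extends MyOption[A] {
  def bind[B](f: A => MyOption[B]): MyOption[B] = f(value)
}
case object MyNone extends MyOption[Nothing] {
  // f is never applied: no value of type Nothing exists to feed it.
  def bind[B](f: Nothing => MyOption[B]): MyOption[B] = MyNone
}

// Covariance at work: MyOption[Nothing] <: MyOption[A] for every A,
// so the single MyNone object is the empty case of all of them.
val s: MyOption[String] = MyNone
val i: MyOption[Int]    = MyNone
```

So all empty values really are one object, and a "None of String" equaling a "None of Int" is by design; the parametrized None[+A] alternative above would distinguish them, at the cost of an allocation per None.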
Static methods in interface; doing monads correctly for OOP
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-1#comment-5291
Shelby wrote:
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5333
Shelby wrote:
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5334
Note in Copute, Daniel's sequence would be coded:
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5347
Shelby wrote:
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5349
Shelby wrote:
Shelby wrote:
Caveat: none of the following code is tested and I am new to Scala and have never installed the Scala (nor the Java) compiler.
Daniel's "typeclass" is a fully generalized convention for declaring static methods of an interface. Imagine you could declare static methods in a trait with this pseudo-code.
- Code:
trait Monad[+X] {
  static def unit[Y] : Y => Monad[Y]
  def bind[M <: Monad[_]] : (X => M) => M
}
sealed trait Option[+X] extends Monad[X] {
  static def unit[Y]( y : Y ) : Monad[Y] = Some( y )
}
To get legal Scala, this is translated as follows, noting that the +, -, or no variance annotation on M depends on where Monad appears in the static methods of Monad.
- Code:
trait Monad[+X] {
  def bind[M <: Monad[_]] : (X => M) => M
}
trait StaticMonad[+M[_]] {
  def unit[Y] : Y => M[Y]
}
sealed trait Option[+X] extends Monad[X] {}
implicit object OptionStaticMonad extends StaticMonad[Option] {
  def unit[Y]( y : Y ) : Option[Y] = Some( y )
}
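As a sanity check on this translation, here is a compilable sketch (hypothetical names StaticM/Opt/Sm/Nn/lift), with unit returning M[Y] rather than Monad[Y] so the caller gets back the concrete type constructor:

```scala
object StaticDemo {
  // The "statics" of a type constructor M, carried by a singleton.
  trait StaticM[M[_]] {
    def unit[Y](y: Y): M[Y]
  }

  sealed trait Opt[+X]
  case class Sm[+X](value: X) extends Opt[X]
  case object Nn extends Opt[Nothing]

  // The implicit singleton plays the role of Opt's static methods.
  implicit object OptStaticM extends StaticM[Opt] {
    def unit[Y](y: Y): Opt[Y] = Sm(y)
  }

  // A caller that names only the type constructor; the compiler supplies
  // the matching singleton of "statics" for whatever M the caller chooses.
  def lift[M[_], Y](y: Y)(implicit tc: StaticM[M]): M[Y] = tc.unit(y)
}
```

The point of the implicit is visible in lift: the caller never touches the singleton directly, which is exactly the inversion-of-control discussed later in this thread.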
Before we can add the cases for Option, note that Monad requires "unit" to be invertible, i.e. bijective, but None has no inverse, so we need an injective monad.
- Code:
trait InjMonad[Sub[_] <: InjMonad[Sub[_],X], +X] {
  def bind[Y] : (X => Sub[Y]) => Sub[Y]
}
sealed trait Option[+X] extends InjMonad[Option,X] {}
case class Some[+X](value: X) extends Option[X] {
  def bind[Y]( f : X => Option[Y] ) : Option[Y] = f( value )
}
case class None[+X]() extends Option[X] {
  def bind[Y]( f : X => Option[Y] ) : Option[Y] = None[Y]()
}
Thus Daniel's sequence.
- Code:
def sequence[M[_], X]( ms : List[M[X]] )( implicit tc : StaticMonad[M] ) = {
  ms.foldRight( tc.unit( List[X]() ) ) { (m, acc) =>
    m.bind { x =>
      acc.bind { tail => tc.unit( x :: tail ) }
    }
  }
}
That syntax is peculiar to Scala; here is a more widely readable version:
- Code:
def sequence[M[_], X]( ms : List[M[X]] )( implicit tc : StaticMonad[M] ) = {
  ms.foldRight( tc.unit( List[X]() ), (m, acc) =>
    m.bind( x =>
      acc.bind( tail => tc.unit( x :: tail ) )
    )
  )
}
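In the same untested spirit as the caveat above, here is a variant of sequence that should actually compile in Scala 2: the implicit goes in its own parameter list, the empty list is written List.empty[X], and the requirement that M's instances carry bind is expressed with an F-bounded constructor M[A] <: Mon[M, A]. All names (Mon, StaticMon, Opt, Sm, Nn) are hypothetical stand-ins:

```scala
object SeqDemo {
  // Instance side: bind, with the concrete constructor M in its signature.
  trait Mon[M[_], +X] {
    def bind[Y](f: X => M[Y]): M[Y]
  }
  // "Static" side: unit, supplied per-constructor by an implicit singleton.
  trait StaticMon[M[_]] {
    def unit[Y](y: Y): M[Y]
  }

  sealed trait Opt[+X] extends Mon[Opt, X]
  case class Sm[+X](value: X) extends Opt[X] {
    def bind[Y](f: X => Opt[Y]): Opt[Y] = f(value)
  }
  case object Nn extends Opt[Nothing] {
    def bind[Y](f: Nothing => Opt[Y]): Opt[Y] = Nn
  }
  implicit object OptStaticMon extends StaticMon[Opt] {
    def unit[Y](y: Y): Opt[Y] = Sm(y)
  }

  // sequence : List[M[X]] => M[List[X]], for any M with bind and unit.
  def sequence[M[A] <: Mon[M, A], X](ms: List[M[X]])(implicit tc: StaticMon[M]): M[List[X]] =
    ms.foldRight(tc.unit(List.empty[X])) { (m, acc) =>
      m.bind { x => acc.bind { tail => tc.unit(x :: tail) } }
    }
}
```

Here sequence(List[Opt[Int]](Sm(1), Sm(2))) yields Sm(List(1, 2)), while any Nn in the input collapses the whole result to Nn.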
Note my version of Daniel's sequence will work with both the bijective Monad and the injective InjMonad, because the call to bind is a method of the instance; whereas Daniel's version assumed the injective monad, and I see no possible way to fix it using his convention of implicit duck typing of non-static methods. His is an example of how duck typing breaks composability.
==================
**** Monad Theory ****
==================
The best layman's explanation I have found so far is "Comprehending Monads" by Philip Wadler, 1992. Google for the PDF.
Conceptually a monad has three functions:
- Code:
unit : X -> M[X]
map  : (X -> Y) -> M[X] -> M[Y]
join : M[M[X]] -> M[X]
The map function might be curried two ways:
- Code:
map : (X -> Y) -> (M[X] -> M[Y])
map : M[X] -> ((X -> Y) -> M[Y]) // Will use this for trait below
We must overload the map function if M is not the same type as N, because otherwise map will not know which "unit" to call (in order to lift Y => M[Y]); overloading on return type alone is ambiguous due to covariance:
- Code:
map : (Y -> M[Y]) -> (X -> Y) -> N[X] -> M[Y]
bind : (X -> M[Y]) -> N[X] -> M[Y]
map a b = bind( x -> a( b( x ) ) )
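These identities are easy to check concretely. The following sketch uses Scala's built-in List as the monad M: unit is List(_), map is List#map, join is flatten, and bind is then derivable as join composed with map, coinciding with the library's flatMap:

```scala
// unit/map/join for the List monad, with bind derived from map and join.
def unit[X](x: X): List[X] = List(x)
def map[X, Y](f: X => Y)(m: List[X]): List[Y] = m.map(f)
def join[X](mm: List[List[X]]): List[X] = mm.flatten

// bind f = join . map f
def bind[X, Y](f: X => List[Y])(m: List[X]): List[Y] = join(map(f)(m))

val xs = List(1, 2, 3)
val f  = (x: Int) => List(x, x * 10)
val derived = bind(f)(xs)   // List(1, 10, 2, 20, 3, 30)
val builtin = xs.flatMap(f) // the same: flatMap is bind
```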
The reason I rephrased the abstracted monad as an inherited trait with static methods is that, so far in my research, I don't agree with a general "implicit" keyword for a language design. The general use of duck typing can violate the localized single-point-of-truth (SPOT) and can make semantic assumptions that were not intended, because duck typing forces all traits and classes to share the same member namespace, and thus essentially bypasses the behavioral conditions of the Liskov Substitution Principle contract of OOP. Also, since duck typing does not explicitly state which interfaces are required at the SPOT of the trait or class declaration, there is no way to know which interfaces are available by looking in one place. Localization (separation) of concerns is a critical attribute of reusable/scalable software design. Again the following is pseudo-code for the translation of static methods to implicit, but now fully generalized to monad theory.
- Code:
trait Monad[+X] {
  static def unit[Y] : Y => Monad[Y]
  def bind[M <: Monad[_]] : (X => M) => M
  def map[M <: Monad[Y], Y]( a : Y => M, b : X => Y ) : M = bind( x => a( b( x ) ) )
  static def join[M <: Monad[_], Y] : M[M[Y]] => M[Y]
}
But the above trait won't work for monads whose "unit" is not bijective, i.e. where the inverse of "unit" is lossy, e.g. None option has no inverse. The injective monads thus know which "unit" to call, thus we could add a map to our prior injective monad, which does not input a "unit".
- Code:
trait InjMonad[Sub[_] <: InjMonad[Sub[_],X], +X] {
  def bind[Y] : (X => Sub[Y]) => Sub[Y]
  def map[Y] : (X => Y) => Sub[Y]
}
sealed trait Option[+X] extends InjMonad[Option,X] {}
case class Some[+X](value: X) extends Option[X] {
  def bind[Y]( f : X => Option[Y] ) : Option[Y] = f( value )
  def map[Y]( f : X => Y ) : Option[Y] = OptionStaticMonad.unit( f( value ) ) // Some( f( value ) )
}
case class None[+X]() extends Option[X] {
  def bind[Y]( f : X => Option[Y] ) : Option[Y] = None[Y]()
  def map[Y]( f : X => Y ) : Option[Y] = None[Y]()
}
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5333
Shelby wrote:
I will offer two improvements to my prior comment-- the prior comment wherein I had proposed a conceptual mapping of pseudo-code "static" interface members to legal Scala syntax.
Note that the StaticMonad trait (in my prior comment) is necessary to enable accessing "statics" on types (e.g. M[_]) that are otherwise unknown due to type erasure (e.g. Daniel's sequence function example), but StaticMonad is not used for direct invocation of statics, e.g. Option.unit( value ). Thus a necessary improvement is to rename object OptionStaticMonad to object Option, which makes trait Option its companion class (or does Scala only allow this if Option is a class?):
- Code:
implicit object Option extends StaticMonad[Option] {
  def unit[Y]( y : Y ) = Some( y )
}
Also, to give functionality similar to what we expect from "static" in Java, a macro (or another language compiling to Scala) could automatically generate the statics for each derived class that did not override them, e.g. as follows. Although this example seems superfluous, it is not harmful, and the generality is needed in other examples.
- Code:
implicit object Some extends StaticMonad[Some] {
  def unit[Y]( y : Y ) = Option.unit( y )
}
implicit object None extends StaticMonad[None] {
  def unit[Y]( y : Y ) = Option.unit( y )
}
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5334
Expounding on my prior comment: in the situations where StaticMonad[M] is employed (versus directly accessing the singleton), type M is unknown; thus it is a more composable abstraction, which inverts control of the access to the singleton statics and gives that control to the caller:
http://en.wikipedia.org/wiki/Hollywood_Principle
http://lists.motion-twin.com/pipermail/haxe/2011-February/041527.html
Type erasure is an orthogonal issue that forces the use of an implicit as a function parameter, versus the compilation of a separate function body for each possible M in reified languages. Even if Scala were reified, trait StaticMonad would still be necessary to abstract the inversion-of-control on singletons. Thus the declaration of implicit instances and parameters is justified by type erasure, but they (along with StaticMonad) could just as well be hidden to make a non-reified language appear to be reified, which is what I was illustrating with the pseudo-code examples.
Note in Copute, Daniel's sequence would be coded:
- Code:
pure sequence<M<X> : Monad<X>, X>( ms : List<M<X>> ) = {
  ms.foldRight( M.unit( List<X> ), \m, acc ->
    m.bind( \x ->
      acc.bind( \tail -> M.unit( tail.append(x) ); );
    );
  )
}
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5347
Shelby wrote:
A variance annotation on Sub was missing in my prior comment, and should be as follows:
- Code:
trait InjMonad[+Sub[_] <: InjMonad[Sub[_],X], +X] {
  def bind[Y, S[Y] >: Sub[Y]] : (X => S[Y]) => S[Y]
}
Without that change, Some and None would not be subtypes of Option, because Sub was invariant.
Also I am thinking the following is more correct, but I haven't compiled any of my code on this page:
- Code:
trait InjMonad[+Sub[X] <: InjMonad[Sub,X], +X] {
  def bind[Y, S[Y] >: Sub[Y]] : (X => S[Y]) => S[Y]
}
I am not sure if that is legal in Scala, but seems to me that is the only way to express that Sub's type parameter is X.
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5349
Shelby wrote:
My prior idea for expressing X to be the Sub's type parameter is not retracted.
However, my suggestion of a covariant annotation on Sub is erroneous for the reason I had stated in prior comments: Sub's lifted state may not be invertible (e.g. None has no value of type X), and thus there may be no mapping from a Sub to its supertype. Thus the correction, restoring an invariant Sub while keeping the other idea, is:
- Code:
trait InjMonad[Sub[X] <: InjMonad[Sub,X], +X] {
  def bind[Y] : (X => Sub[Y]) => Sub[Y]
}
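For what it's worth, this final form does appear to be expressible in Scala 2 (modulo the same untested caveat as before). A self-contained sketch with hypothetical names Opt/Sm/Nn (to avoid the standard library), showing that Sm and Nn remain subtypes of Opt, since both implement bind with Opt, not their own type, in the signature:

```scala
// F-bounded injective monad: Sub is the concrete constructor itself.
trait InjMonad[Sub[A] <: InjMonad[Sub, A], +X] {
  def bind[Y](f: X => Sub[Y]): Sub[Y]
}

sealed trait Opt[+X] extends InjMonad[Opt, X]
case class Sm[+X](value: X) extends Opt[X] {
  def bind[Y](f: X => Opt[Y]): Opt[Y] = f(value)
}
case object Nn extends Opt[Nothing] {
  // No value of type X exists to apply f to, so the empty case propagates.
  def bind[Y](f: Nothing => Opt[Y]): Opt[Y] = Nn
}

// Both subtyping directions the comment above is concerned with:
val some: Opt[Int] = Sm(1)   // Sm[Int] <: Opt[Int]
val none: Opt[Int] = Nn      // Opt[Nothing] <: Opt[Int]
```

Invariance of Sub does not block these subtype relations, because Sub is instantiated to Opt for all three types; only the second (covariant) parameter varies.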
It was incorrect when I wrote that this invariant Sub prevents Some and None from being subtypes of Option. Some#bind and None#bind type signatures contain Option, not Some and None respectively.
I cannot think of a useful subtype that would overload bind, but in that case, in my paradigm the subtype could multiply inherit from InjMonad with a distinct Sub. In Daniel's typeclass paradigm, this would entail adding another singleton implicit object that inherits from Monad.
Last edited by Shelby on Thu Mar 10, 2011 2:59 am; edited 15 times in total