Fundamental outline of Copute
http://copute.com/dev/docs/Copute/ref/function.html
Without its powerful static typing and pure-function options, Copute has essentially the same grammar as JavaScript.
The Copute language is composed of five fundamentals: type, instance reference, expression, function, and imperative scope.
- Type is declared by a class statement, an enum statement, inseparably in a function (instance construction) expression, or by the identifier associated with the aforementioned declarations (when not anonymous).
- Instance construction is declared by a class or enum constructor call expression, a function expression, or a literal class expression-- all of which return an instance reference.
- Expression constructs an instance or operates on instance reference(s), and returns an instance reference or void.
- Functional programming is a function call expression, which may optionally nest (a hierarchy of) function call expressions. A referentially transparent (aka pure) function is re-entrant, stateless, and partial-evaluation agnostic, and thus composable (aka reusable).
- Imperative (aka stateful, or state-machine) programming is an ordered sequence of expressions. Each imperative sequence (in a nested hierarchy of them) is an identifier namespace (aka scope)-- a means to granularize the referential opacity. A statement is a grammatical unit which forms an expression returning the void type (see the sketch after this list).
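A rough illustration of the last two points (a sketch in Scala, which this thread quotes elsewhere, rather than guessing at Copute's own syntax): a scope is a namespace that is itself an expression, and a statement is just an expression whose type is void (Unit in Scala).
- Code:
val outer: Int = {
  val x = 1                         // binding visible only inside this scope
  val stmt: Unit = println(x)       // a statement: an expression of type Unit ("void")
  val inner = { val y = 2; x + y }  // nested scope; the block is itself an expression
  inner                             // outer == 3
}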
A Copute source code file is implicitly wrapped in an anonymous function that is called at initialization.
Steven Obua just described my Copute project, very substantially
http://lambda-the-ultimate.org/node/4182#comment-64249
I will have a look at Steven Obua's current work.
I will be emailing the following to Steven Obua.
Okay, in 20 minutes I reviewed his new computer language Babel-17, his recent research paper, and the expert criticisms he received:
http://arxiv.org/PS_cache/arxiv/pdf/1007/1007.3023v1.pdf
http://phlegmaticprogrammer.wordpress.com/2010/11/21/response-to-reviews/
http://phlegmaticprogrammer.wordpress.com/2010/11/21/reviews-for-purely-functional-structured-programming/
He is aiming to achieve referential transparency (purity) in a structured-language style that looks a lot like the imperative (stateful) code familiar to many programmers, but is actually selectively pure externally, by selectively not allowing references to see the external scope. He accomplishes this by "shadowing", which means hiding an external scope reference by declaring another instance with the same identifier in the local scope. This means some references could still see the external scope if they were not also hidden, thus his design is very granular in that respect (but I think granular in an undesirable way if not coupled with some additional semantics, as I will explain below). JavaScript has this selective hiding capability now, and so does the design of Copute. As the expert reviewers point out, this is not a new concept. The hard part is how to get programmers to use it for purity.
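To make the shadowing idea concrete, here is a minimal sketch in Scala (my illustration, not Babel-17 code; the names are made up):
- Code:
object ShadowDemo {
  var count = 0                      // impure external state in the enclosing scope
  def looksImperativeButIsPure(n: Int): Int = {
    val count = 0                    // shadows the outer var; the body below cannot see or mutate it
    val total = count + n            // depends only on the argument and local bindings
    total * 2
  }
}
Any reference that is not shadowed would still see the external scope, which is the granularity concern raised above.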
His marketing objective is related to mine, in that he wants to enable the integration of structured imperative programming with pure functional programming, in a more familiar and "less mathematical" (more intuitive or natural) semantics than the Haskell monad (for the average non-mathematical programmer).
However, I think he falls far short of what I am doing with the design of Copute. Afaics, the key problem with his design is that there is no structured and explicit way to define the boundaries of which functions are referentially transparent and which are not. It is the composition of pure functions that enables reusability to scale. This is the same criticism I make against the Haskell monad: implicit typing and the ad-hoc polymorphism typing system allow any type to cross-pollute another, meaning there are no concrete, explicit boundaries on purity (nor on semantics, and thus type-safety in general). I wrote about that at the following link.
http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
It is really the typing system that enables scalable composability (his Babel-17 is not typed, so it is hopeless, as the expert reviewers point out). This is why Copute puts so much effort into getting the purity rules for type variance (inheritance) correct; type is how we will parallelize our future (note the post at the following link was censored from LtU).
https://goldwetrust.forumotion.com/t112p90-computers#4061
So in summary, he is correct that Scala missed the boat on purity (though Scala creator Odersky has stated this is because of the challenge of integrating with the rest of the Java world), but we need the correct typing system in order to achieve parallelism. I think Steven Obua is starting to realize that, but he is still delegating to inference in the compiler instead of typing, which is incorrect because the programmer has to think in terms of naturally concurrent data structures:
http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/
http://lambda-the-ultimate.org/node/4182#comment-64170
http://lambda-the-ultimate.org/node/4182#comment-64227
re: Steven Obua just described my Copute project, very substantially
http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/#comment-92
Hi Steven, thanks for the clarification. I agree, a public discussion is preferred. I didn't want to create Copute (and I don't even have a working compiler yet), but I need a better mainstream language, and I got tired of begging others to do it and waiting. I don't even think I am the most qualified to do it (I'm historically an applications programmer, learning to become a language researcher + designer since 2009), so here I am. So it is worthwhile if there is anything we can learn from each other, and share publicly. In short, I appreciate the discussion, because I don't want to make a design error or waste my effort "barking up the wrong tree".
If I understand correctly, per your clarification, Babel-17 is making all functions pure, but I asserted my understanding in my prior post that within pure functions, the structured code is granularly opaque by employing per-data-instance shadowing. That does not make the containing function impure, so that is fine (Copute and JavaScript can do that too, but JavaScript can't assert that a function is pure, nor that closures are opaque). One proposed difference for Copute is that the function is only a referentially transparent boundary if it is declared and enforced as pure.
If I understand correctly that we share the goal of facilitating integration/interoperation of (and transition between) imperative and pure functional programming, then why would we not need both impure and pure functions?
My understanding is that programs are more than just functions; they are compositions of rich semantic paradigms which can be declared with typing. And life is not entirely referentially transparent. For example, the Observer pattern requires a callback (external state), thus it can never be a referentially transparent construction. However, in an idealized world, we can invert the Observer pattern as Functional Reactive Programming (FRP):
http://www.mail-archive.com/haskell-cafe@haskell.org/msg66898.html
http://www.haskell.org/haskellwiki/Phooey
FRP is not theoretically less efficient, because it could be optimized to only recompute the portions of the global FRP chain that have changed, thus it isn't different from the propagation of state dependencies in the Observer pattern in that respect. In both Observer and FRP, the order of propagation of state change can be coded indeterminately or deterministically. However, in some respects the Observer pattern is easier, because one can just slap one on anywhere, without too much concern for overall design (however, this sloppiness will manifest as race conditions and other out-of-order situations, etc.).
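To sketch the inversion (my illustration in Scala; the names and the FRP runtime are assumptions, not any particular library's API):
- Code:
// Observer pattern: mutable external state pushed through callbacks.
class CelsiusSensor {
  private var listeners: List[Double => Unit] = Nil
  def subscribe(callback: Double => Unit): Unit =
    listeners = callback :: listeners
  def update(c: Double): Unit =
    listeners.foreach(f => f(c))     // side effects fan out to the observers
}

// FRP-style inversion: the derived value is a pure function of its input;
// an assumed runtime (not shown) recomputes it only when celsius changes.
def fahrenheit(celsius: Double): Double = celsius * 9.0 / 5.0 + 32.0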
So I will not argue theoretically that it is impossible to make every possible program a composition of referentially transparent functions (that's where we want to be); however, in practice, the transition from where we are today in the computer world to the future will be eased if one can use a referentially opaque function sometimes. Sometimes "quick and dirty" is what gets the job done, and often what gets the job done is what is popular. So my idea with Copute was to make the transition to pure functional programming as familiar and painless as possible...and in my case, to users of afaik the most popular computer language in the world, JavaScript. Also because then I will have a ready market, since afaik there is no good pure FP compiler with a great typing system, and optional integrated dynamic typing, that outputs JavaScript? (I started by studying HaXe's strengths and weaknesses, then I learned Haskell, etc.)
Back to the more fundamental theory point. Although we want programs which are composed entirely of referentially transparent (i.e. pure) functions, we encounter gridlock where we need to refactor up a tree of pure FP code, if some function on a branch wasn't granular enough (i.e. conflated some data which really should be orthogonal). So eventually our ideal world of pure FP everywhere becomes untenable-- in short, it can't scale in wide-area composition. It is sort of analogous to the C++ "const" blunder, which had to propagate everywhere in order to be used anywhere.
Thus we are likely to use pure FP for problem spaces that are well encapsulated, but we will continue to use imperative coding for more dynamic social integration. Thus it seems critical that our typing system make these boundaries transparent (explicit, not silently inferred polymorphism).
Do you have any comments that could spur another round of exchange? Or does this just seem wrong or irrelevant from your perspective? I am eager to learn from anyone who is willing to share. Thanks.
===========
ADD: Coase's theorem applies, i.e. there is no external reference point, thus all boundaries will fail (be subverted by the free market, by the fact that the universe is trending to maximum disorder). Thus an all-pure-FP-or-nothing boundary at the function is not realistic. It is not in harmony with thermodynamics and entropy.
Copute as a startup...
Yesterday was an interesting day, because I was excited to be getting some exchanges with people who do the kind of work I do, and thus can challenge, inspire, and interact with me on that intellectual level.
My emotional reaction, I believe, was to want to find a way, as quickly as possible, to get some such people to work together with me, because it would be immensely fun, exciting, and productive. I think that is in large part why these guys work in Silicon Valley-- for the entire social aspect of the synergy.
So right there, that probably kills any chance of others working with me at this juncture, given I am in the Philippines and have no desire to go work in Silicon Valley (or any other tech center, such as San Antonio, etc.).
But as I got to thinking more about the economics of what I am doing, I realized that I probably shouldn't be paying anyone a dime. The reason is that what will make this fly is it being open-source, which means people contribute because they know they own it, in that they can use the sum of the work any way they wish, now and into the future.
So the only way to get an open-source project rolling with contributions is to first deliver an initial product which is useful enough that people start contributing, because they need some aspect of what is already there, combined with something else they need.
So it is all about need. That is key.
Also, I don't think you will get anyone to contribute to open-source if they think you are going to charge for access. So I think it is very key to make it clear that the model for the language is no charge for access. It has to be stressed that I own the Copute.com domain and may provide an optional way for developers to monetize their efforts, but that is an entirely optional marketing side-show; Copute itself is open-source, public domain, and not owned by anyone. No strings attached.
So I think the correct time to bring in investors is when we go to launch the Copute.com monetization engine, which has to come after the Copute language is done and already generating significant contribution.
So this means I am on my own for the time being. If anyone joins to help me at this stage, it will be a gift from God, because it would take someone LIKE MYSELF, who is utterly convinced of the importance of Copute and wants to dedicate themselves to it without any certainty of financial gain.
I don't think I am likely to find another person like myself. I was a little bit inspired reading Joseph Perla's blog and realizing there is a bright young man who shares some of my philosophy (but not all). But there is still a big gap between that and being LIKE ME with respect to Copute.
Of course, we know what happened the last time I got inspired about a young programmer, Nicolas Cannasse, because I was admiring his work on HaXe, but it turned very bitter when he shot down every idea I had about improving HaXe and banned me from his discussion group mailing list. However, Copute is in large part influenced by HaXe, so my tribute to Nicolas is implicit. So it is not bitter after all. I even wrote in private to others that I don't have to be frustrated with Nicolas; I wish him the very best.
Realized Haskell vs. Copute are fundamentally equivalent in power, except for...
Copute has one key fundamental advantage:
http://code.google.com/p/copute/issues/detail?id=39#c2
Plus, Copute has a more intuitive syntax for imperative programmers (the bulk of programmers):
http://code.google.com/p/copute/issues/detail?id=39#c3
Why Copute won't support catching exceptions
re: Steven Obua just described my Copute project, very substantially
http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/#comment-99
Agreed, Go is aimed towards systems programming and isn't an optimum solution for the use case I am driving towards either. Moreover, I think the future of systems programming is to have apps that are provably correct (Linus Torvalds admitted such a remote possibility for "designer languages"), so I think Go is also going to be superseded in time, but that might be a long time from now. Good to see Thompson is working on an upgrade for C.
The problem is that an exception causes order dependence, which removes the ability to parallelize the implementation. Consider this quoted example:
- Code:
raise First + raise Second handle First => 1 | Second => 2
What is the value of this expression? It clearly depends on the order of evaluation.
Although afaik an exception does not violate referential transparency literally, it does remove the orthogonality of functions, which for me is one of the key outcomes of referential transparency, and orthogonality is necessary for composability. Typing, meanwhile, provides the lego patterns for composability. (Btw, the name Co-pute is driving towards cooperation and composition; I am aiming for a wide-scale web mashup language.)
If the programmer expects the possibility of an exception, then the function needs to declare that in its types. Afaics, there is no shortcut of exceptions without type that maintains composability and concurrency. The programmer can make a type NonZero, if one prefers to attack the problem on input, sort of analogous to reversing the Observer pattern to Functional Reactive Programming as I mentioned in my 2nd post above.
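For instance, a minimal sketch of that input type in Scala (my illustration of the idea, not Copute syntax):
- Code:
// The precondition "denominator is never zero" lives in the type,
// so the division itself can never throw.
final class NonZero private (val value: Int)
object NonZero {
  def apply(n: Int): Option[NonZero] =   // the caller is forced to handle 0 here
    if (n != 0) Some(new NonZero(n)) else None
}

def divide(numerator: Int, denominator: NonZero): Int =
  numerator / denominator.value          // cannot throw divide-by-zero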
Yeah, it is fugly to have to propagate exception cases everywhere. But that is life. The shortcut has a real, important cost. And I agreed with the comments at Go that exceptions turn into a convoluted mess, especially when one starts composing functions in different permutations.
==============
ADD: Consider a function A which inputs a function B, where A catches an expected exception, but note this is not declared in the type of A or B. So function B is input to A, but unlike other Bs in the past, this B catches the exception that A is expecting to catch. The programmer of B would have no way of knowing that A expected the same exception, because that is not declared in the types. Orthogonality and composability are subverted. Whereas if B declared, by returning a non-exception type, that it handles the exception, then the problem is resolved. So then A would be overloaded (folded) on the return type of B: one version/guard of A that handles the exception and one that lets B handle it.
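A hedged illustration of that scenario in Scala (my sketch; note the signatures say nothing about the exception, which is exactly the problem):
- Code:
// A expects to handle the arithmetic exception raised by calling B.
def a(b: Int => Int): Int =
  try b(0) catch { case _: ArithmeticException => -1 }

val bOld: Int => Int = n => 100 / n      // lets the exception escape to A
val bNew: Int => Int = n =>
  try 100 / n catch { case _: ArithmeticException => 0 }  // silently pre-empts A's handler

// a(bOld) == -1 but a(bNew) == 0: identical types, different meaning,
// because the invariant is hidden in the implementations.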
I can see how you get the elegant determinism by basically adding a "null" test on every return type of every function, instead of doing a longjmp, but afaics hiding the exception return type from the programmer causes the above problem.
I see no problem using exceptions when there is no function call inside the try block.
If I have made a wrong assumption or erroneous statement, I apologize in advance.
More on composability and exceptions
Still "talking shop" with the Steven Obua.
http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/#comment-103
I was making two related points, one being that concurrency is not achieved unless we use a Maybe type. Afaics, you've since clarified for me (thank you) that you are using a Maybe type, and you've hidden (made implicit) the monadic action in a try-catch abstraction.
Agreed, exceptions can be concurrent if they don't employ the longjmp paradigm, and instead (even implicitly) employ the Exception monad on the Maybe algebraic type, where the compiler is doing the monadic lifting behind the scenes.
So our discussion is about the choice between doing that implicitly and doing it with static typing. The advantage of doing it with static typing is that it can be propagated automatically with a monad type (which afaics Babel-17 achieves implicitly), and we gain the ability to overload on type (how Haskell and Copute do it).
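To make the lifting explicit, a minimal Scala sketch (my assumed rendering of the semantics, not Babel-17's actual implementation; toIntOption requires Scala 2.13+):
- Code:
// The "exception" is just a value; a failed step short-circuits the chain
// with no longjmp and no hidden evaluation-order dependence.
def parse(s: String): Option[Int] = s.toIntOption
def reciprocal(n: Int): Option[Double] =
  if (n != 0) Some(1.0 / n) else None

val ok: Option[Double]     = parse("4").flatMap(reciprocal)   // Some(0.25)
val failed: Option[Double] = parse("0").flatMap(reciprocal)   // None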
The second point I was making is that static typing is critical for composability.
With dynamic typing, the only way to prove correctness is with assertions on inputs (i.e. exceptions). These assertions are just types[1], e.g. instead of throwing an exception to ensure a non-zero input, just make the input type a NonZero type.
How do we compose functions when their invariants are not explicitly stated by type, but rather hidden in their implementation as assertions that will throw exceptions? We end up with spaghetti, because the composition of the invariants is not being checked by the compiler. Non-declared assumptions get lost. I have 25+ years coding in spaghetti. Is there another way to deal with it that I am not aware of?
For composability, afaics the exceptions must be encoded on the return type (post-conditions[1]) and/or on the input types (pre-conditions[1]).
[1] http://lambda-the-ultimate.org/node/1551#comment-64186
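A brief sketch of the post-condition variant in Scala (my illustration; the point is that the error becomes part of the declared signature, so the compiler checks every composition site):
- Code:
sealed trait DivError
case object DivideByZero extends DivError

def div(n: Int, d: Int): Either[DivError, Int] =
  if (d == 0) Left(DivideByZero) else Right(n / d)

// Composition is forced to acknowledge the failure case:
val r: Either[DivError, Int] = div(10, 2).flatMap(q => div(q, 0))  // Left(DivideByZero)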
=======================================
http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/#comment-105
Agreed, I definitely want to try to avoid subjective injection (because it won't help either of us produce a better product). So we are not going to fight that irrational war, because we will delineate what we can conclude objectively. You will correct me and point out where I am making a subjective conclusion.
Agreed that the tension in composability hinges on the granularity (more completely, the fitness) with which the invariants can be declared and enforced/checked. Agreed also that to the degree that one's statically typed implementation does not fully express the invariant semantics, aliasing error will spill out chaotically (aliasing error manifests as noise).
Isn't the objective fallacy of arguing against static typing that the alternative isn't better? Afaics, testing is never exhaustive due to the Halting problem (beyond getting some common use cases covered, which is not an argument against static typing because, as you also said, it is needed in any case), because one would need to test every possible composition at all permutations of N potential mashup companion functions before we even know what they will be. Note, there were some comments at LtU yesterday about the impracticality and inadequacy of testing for proving associativity for Guy Steele's example. Documentation is not an argument against static typing either, as it is needed in any case.
Static typing is a first-level check; it enables the compiler to check some errors. And to the degree one strives to produce types that fully express the invariant pre- and post-conditions at all semantic levels, the degree of checking is increased (but sadly, aliasing error isn't a linear phenomenon, so that might not help). This is not security against all possible semantic errors, but at least the remaining errors are those that slipped through the design of the types (even though they manifest as aliasing error far from the source). Types can be reused, so we can put a lot of effort into designing them well.
There is a tradeoff. As the types become more restrictive, they become more difficult to compose. The C "const" blunder is one of the infamous painful examples (I have hopefully been careful not to repeat "const" in Copute). This is real life injecting itself into our attempts at a Holy Grail, of which there never will be one, of course. "const" is actually a futures contract, which is the antithesis of natural law: it could never be assigned to a non-const in any scenario (there was no escape route), thus it infected the entire program.
Thanks for pointing out that an exception is not a result type and thus a design error. I agree that declaring exceptions as return types is a design error (an ad-hoc hack), because an exception is not a proper post-condition, i.e. it is a non-result semantic, and thus it is a design error to return it as a result. But I don't see objectively how an implicitly lifted exception monadic action (try-catch) is not also a design error by the same logic? Divide-by-zero means our result is NFG (no fugling good, lol), which is not a result at all; it is a different semantic entirely, so normally we design our code so that the exception will never occur. So I offer a NonZero argument type: the caller can construct that type and check at run-time that they are not passing a 0 value. The dynamic checks are still there with static typing, but they are forced to be checked (note the constructor NonZero( 0 ) would throw an assumed uncaught exception, aka an assert, i.e. a stack trace into the debugger, because the caller didn't even do the check). If the caller already had a NonZero type, they don't need to check it again. Afaics, that is more PROVABLY correct than returning an exception, because it describes the compiler-checked invariants, rather than an ad-hoc return which is not a return semantic. So I am not arguing for an exception return type except where you argue for try-catch as an ad-hoc "solution" (perhaps that wasn't reified in my prior post), but rather for declaring the invariant arguments and avoiding exceptions entirely, where practical (i.e. correct design).
After all, we are not in a religious war, because Copute supports dynamic typing too. I understand that in some (maybe even most) use cases, static typing does not provide reasonable benefits to justify its use. It can cause tsuris for very minute gains in checking. Afaik, inferred typing goes a long way to increase the utility. And potentially map-reduce constructed data types will make typing much more useful, which pertains to the title of this blog page (the link is to my comments which were censored from LtU). I am not criticizing Babel-17 for not having static typing (and am encouraging you to pursue your design ideas); I only asked that we characterize any tradeoffs of potentially adding it later incrementally, never, or now (as I am trying to do in one big, difficult design+implementation step). And afaics, this discussion has helped document/reify/correct some of my own understanding. I hope we have also clarified for your readers some of your design decisions for Babel-17. What else can I say, but a big sincere thank you.
Are there any more objective observations we can make on this issue? Any corrections?
Shocking Java comparison
I was shocked by how many things Java cannot do (well) that Copute proposes to do:
http://copute.com/dev/docs/Copute/ref/intro.html#Java
This list became more exhaustive after researching how I might compile Copute code to Java source code-- it actually can probably be done, but the Java code will be ugly and bloated, and almost impossible to follow the semantics of.
Eliminated exceptions from computer language!
Wow! I was able to eliminate the assertion exceptions entirely:
http://code.google.com/p/copute/issues/detail?id=42
Remember the prior discussion with the creator of Babel-17? Well, afaics his point is entirely void now.
I think this is a major breakthrough for computing. Absolutely, fundamentally important.
Attempt to shut down the internet failed.
As I had predicted, because it only takes a trickle of truth to route around captive markets (Coase's Theorem again, i.e. the 2nd law of thermo: the low-entropy state will always seep away into maximum disorder, i.e. possibilities):
http://edition.cnn.com/2011/TECH/web/02/03/internet.shut.down/index.html?hpt=Sbin
"If you really wanted to turn off the global internet, you'd have to seek out people on every continent and every country," said Cowie from Renesys. "The internet is so decentralized that there is no kill switch."
"No you can't do that," said Harvard's Faris. "The internet is designed to be robust. Certain links break and then other links are opened."
In Egypt, for example, people who couldn't access the broadband internet were able to place international phone calls to Europe to log on to dial-up internet service, he said, which, of course, operates on phone lines.
Google even announced a service that would let people in Egypt use landline telephones to post to Twitter using voice messages.
"Communication continues and people revert to other modes,"
Use your WiFi router as a wireless multi-hop network hub:
http://www.zerohedge.com/article/how-maintain-internet-access-even-if-your-government-turns-it
SRSrocco, is technology imploding due to "complexity", as you had predicted? No. That is because you did not understand that increasing possibilities is not complexity; it is actually less complex, because there are fewer binding futures contracts and more freedom for the market to anneal to dynamic situations.
Facebook currency
http://www.marketoracle.co.uk/Article25982.html#comment100122
Shelby wrote: The currency has to be created. If it does not represent an exchange of tangible money, then it means whoever created it is like a central bank, stealing that value and causing inflation as it spends it into the ecosystem.
So in essence what Facebook has done is admit that their business model is an economic failure:
http://www.jperla.com/blog/post/facebook-is-a-ponzi-scheme
Now they will take down all the game developers with them, because their system could not sustain economic viability with a free market of game developers charging how they wished.
One upside is that a virtual currency is a way for people in the developing world to earn income on Facebook and avoid the issues of money transfer and transactional costs. But the problem is that if Facebook doesn't back this currency with gold or silver, then it will just be stealing value via inflation and ripping off the entire ecosystem, eventually bringing the whole thing down.
The genesis of Facebook's demise will be the decentralized social network. It is coming. The Napster or Gnutella version of Facebook is coming. I should know. Hint.
Scala 2500% growth rate of job listings at indeed.com
I can't wait until a year or 1.5 years from now, when we can redo the search on indeed.com for both sets of jobs advertised.
You entirely missed the point of why I posted about Scala's 2500% (and accelerating!) growth rate.
The point is that it shows that there is a potentially huge demand for a Copute-like breakthrough in language design.
One of the first things a marketer has to determine is if there is a high-growth unmet need or market niche:
http://www.scala-lang.org/node/3272
http://stackoverflow.com/questions/1104274/scala-as-the-new-java
http://stackoverflow.com/questions/1108833/should-i-study-scala
Of course, it is possible that once Scala saturates the hard-core early adopters, then its growth will falter, because Scala is very hard to learn ("kitchen-sink" of features, with unfamiliar syntax):
http://www.google.com/search?q=scala+criticism
http://creativekarma.com/ee.php/weblog/comments/my_verdict_on_the_scala_language/
http://creativekarma.com/ee.php/weblog/comments/static_typing_and_scala/
http://stackoverflow.com/questions/3112725/advantages-of-scalas-type-system/3113741#3113741
http://stackoverflow.com/questions/1025181/hidden-features-of-scala
Imo, Scala suffers from trying to be too general, thus allowing too many ways to do the same thing, and thus requiring the programmer to know all of it in order to read the code of others (i.e. no single-point-of-truth simplicity). I am consciously trying to limit Copute's syntax and keep it familiar (C-like), including only paradigms that provide the necessary generality and unifying orthogonal paradigms.
If you want to wrap your head around how knowledgeable I am in this field, take for example this page (and I have years and millions of lines of code experience in assembler, C, C++, PHP, etc. too):
http://stackoverflow.com/questions/61088/hidden-features-of-javascript
I think you'd be hard pressed to expose an average C# programmer to all of it in one place if not for SO. It'd take years of playing with it to come up with the same hard won list. – Allain Lalonde Sep 14 '08 at 18:54
I've been writing JavaScript professionally for 10 years now and I learned a thing or three from this thread. Thanks, Alan! – Andrew Hedges Sep 20 '08 at 7:39
Well, I knew everything on that page, and I can even correct some mistakes (I knew this years ago):
@Vincent Robert: please note that arguments.callee is being deprecated. – ken Dec 29 '10 at 21:50
Wrong, it is arguments.caller that is being deprecated.
https://developer.mozilla.org/en/JavaScript/Reference/Functions_and_function_scope/arguments/caller
https://developer.mozilla.org/en/JavaScript/Reference/Functions_and_function_scope/arguments/callee
A few comments on Scala:
http://creativekarma.com/ee.php/weblog/comments/my_verdict_on_the_scala_language/
The complexity of functional programming is perhaps a bit easier to explain. It’s certainly possible to write nearly conventional code in Scala. Here, for example, is Scala code for a conventional way to sum the elements of a list of integers:
- Code:
def sum(l: List[int]): int = {
var result: int = 0
for (item <- l)
result += item
return result
}
But that’s not “the Scala way”. A good Scala programmer is expected to use this instead:
- Code:
def sum(l: List[int]): int = (0/:l){_+_}
Isn’t that lovely? This code (that appears to be line noise) is probably the most classic example of a catamorphism (a data transform that results in less data out than in). Basically it says, initialize the result to 0, then go from beginning to end through l, and for each item compute the new result to be result+item. Oh wait, isn’t that exactly what our earlier code did? Well, um, yes, but… this way doesn’t use any of that evil nasty mutable state. This code might be darned near illegible to normal people, but it is the pinnacle of purity and virtue in the FP world.
Oh… if you’re one of those people who worries about efficiency, you do not want to know what that tiny little bit of code expands to.
Let me say that I like immutable data and immutable data structures. I wrote an article here about immutable data in Java. One of the things that attracted me to Scala was its full support for immutable data and immutable data structures. But dagnabbit, mutable variables within a method (or function) aren’t a crime and won’t hurt anything when they’re allocated on the stack as is done by the JVM. This mutable vs. immutable fight has been going on since Turing and Church. Pretty much 100% of all actual, working, useful programs have been written with mutable state. So FP programmers, just get over your aversion to it, okay?
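An aside for readers puzzled by the quoted one-liner: (0/:l){_+_} uses Scala's (since-deprecated) symbolic alias for a left fold, and desugars to roughly:
- Code:
val l = List(1, 2, 3)
// (0 /: l)(_ + _) means: start from 0 and combine left-to-right.
val total = l.foldLeft(0)((result, item) => result + item)   // 6, with no mutable state
// `_ + _` is just shorthand for the two-argument lambda above.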
Although I agreed with much of what that author wrote, and I also agree with him above that Scala's FP syntax is cryptic and there are too many ways to write the same thing in ever more cryptic ways (e.g. the use of _ + 1 instead of (x) => x + 1 for a lambda/anonymous function), I still must point out that the author is wrong about the importance of using FP within functions: although this doesn't impact the immutability of the containing function, it does impact whether the inner code is concurrency agnostic and can be parallelized on N cores.
Copute won't even allow "for" loops (while loops are available for diehards), but it will have a much saner syntax for FP expressions.
And finally I end this post with some humor from the comment section of that prior link:
- Code:
(0/:l){_+_} !!!!
I’m not one to disparage a languages syntax for being too heavy on the symbols, but do you guy realize that looks like a smily version of goatse guy wearing a hat?
Last edited by Shelby on Mon Feb 07, 2011 1:46 am; edited 5 times in total
Real risk I have on Copute is technical, not marketing risk
This is why I am spending so much effort on the design stage now.
Because even if no person ever used Copute, it would be worth building just for my own use: I plan to write another million lines of code in my lifetime, and Copute can make that an order of magnitude more productive. The existing languages are bad enough that it is actually worth a year of time to fix them, even if I end up the only user of Copute. However, realistically it will take more than a year, especially to get a good IDE and debugger done (so this is causing me much contemplation).
And if I do write a million lines of code using Copute, that nearly ensures it will become popular, because others will start using my libraries of code, and they will be using Copute when they do.
So the main challenge is whether Copute is technically correct (a true order-of-magnitude gain, or a fallacy?), and the timing to get there before someone else makes Copute unnecessary.
Every 10 years we need a new programming language paradigm
http://creativekarma.com/ee.php/weblog/about/
About now, it's time for the one that follows the Java paradigm (the virtual machine, garbage collection, no pointers, everything is an object).
In 1975 I started using “structured programming” techniques in assembly language, and became a true believer.
In 1983 a new era dawned for me as I started doing some C programming on Unix and MS-DOS. For the next five years, I would be programming mixed C/assembly systems running on a variety of platforms including microcoded bit-slice graphics processors, PCs, 68K systems, and mainframes. For the five years after that, I programmed almost exclusively in C on Unix, MS-DOS, and Windows.
Another new era began in 1994 when I started doing object-oriented programming in C++ on Windows. I fell in love with OO, but C++ I wasn’t so sure about. Five years later I came across the Eiffel language, and my feelings for C++ quickly spiraled toward “contempt.”
The following year, 2000, I made the switch to Java and I’ve been working in Java ever since.
Copute influenced by
The Wikipedia page for each computer language lists the languages that influenced it.
Languages Copute was influenced by, in ascending chronological order:
C++
PHP
JavaScript
HaXe
Haskell
Scala
And by descending degree of importance/influence:
HaXe
Haskell
Scala
JavaScript
PHP
C++
P.S. I misread the charts at indeed.com: they show "percentage growth", not "percentage rate of growth". Thus the 2500% growth of Scala jobs and the -50% loss of COBOL jobs are cumulative, not a variable rate of growth (i.e. distance, not the 1st derivative = velocity). It looks like Scala's 2500% growth was (afair) primarily in the past 2 years, so it is still an astronomical rate if sustained, but in any case the number of years would be significantly more than I computed previously. Also, the rate of growth may already be slowing (I would need to go back and study the chart carefully).
Doug Pardee rebuttal
http://creativekarma.com/ee.php/weblog/comments/death_to_the_liskov_substitutability_principle/
The problem is not to reuse monoliths but to make interfaces granular enough to reuse the components of the monoliths.
http://creativekarma.com/ee.php/weblog/comments/static_typing_and_scala/
Although I agree that Scala has, to some extent, syntax and paradigm complexity overload, afaics you missed the key point: while imperative code inside a function does not impact the referential transparency (immutability) of the containing function, FP code is required to obtain parallelism:
https://goldwetrust.forumotion.com/t112p90-computers#4061
I think much of Scala's perceived complexity comes from several things: the way Scala has been explained so far has been very obtuse; the syntax is just different enough from the C++/Java genre to make it hard to read (it takes the mind years to become as comfortable reading a new syntax as it did learning to drive a manual transmission); and many paradigms in Scala are not unified (e.g. lazy vals and by-name parameters are really both just automatic function closures). I am working on improving these issues for Copute, and adding referential transparency.
http://copute.com/dev/docs/Copute/ref/intro.html
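To make the parallelism point concrete, here is a minimal Scala sketch (using the parallel collections introduced in Scala 2.9; the variable names are mine). The mutable loop is order-dependent and serializes on its accumulator, while the pure fold over an associative operator can be partitioned across N cores:

- Code:
val xs = (1 to 1000000).toList

// Imperative: mutating `total` makes the loop order-dependent, and
// concurrent threads would race on the shared variable.
var total = 0
for (x <- xs) total += x

// Pure: a fold with an associative operator and neutral element can be split
// into chunks, folded per core, and recombined (Scala 2.9 parallel collections).
val ptotal = xs.par.fold(0)(_ + _)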
Last edited by Shelby on Sat Aug 06, 2011 1:14 pm; edited 4 times in total
Compare explanations of mixins
My explanation:
http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
The explanations from Scala's creator:
http://www.scala-lang.org/node/117
p. 5, §2.2 "Modular Mixin Composition" of Scalable Component Abstractions, Odersky & Zenger, Proceedings of OOPSLA 2005, San Diego, October 2005 (note that Copute reverses the inheritance list order).
Which one is more complete, concise, and comprehensible? Don't answer in this thread please.
Here is a worse explanation from elsewhere on the web:
http://debasishg.blogspot.com/2006/04/scala-compose-classes-with-mixins.html
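For readers weighing those explanations, here is a minimal sketch of what mixin composition means in Scala (standard trait linearization; the names are my own example, and note again that Copute reverses the inheritance list order):

- Code:
trait Greeter { def greet(name: String): String = "Hello, " + name }

trait Excited extends Greeter {
  override def greet(name: String): String = super.greet(name) + "!"
}

trait Formal extends Greeter {
  override def greet(name: String): String = super.greet(name) + ", pleased to meet you"
}

// Mixin composition: super calls resolve right-to-left through the mixin list
// (the linearization), so Excited runs first and its `super` is Formal.
object Bot extends Greeter with Formal with Excited

// Bot.greet("Ada") == "Hello, Ada, pleased to meet you!"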
P.S. I post to my goldwetrust.forumotion.com forum so that I have copies of my thoughts to refer back to and won't lose the information. It has nothing to do with wanting to target a programmer audience of readers. I don't want a lot of questions and attention from programmers right now; that would just slow me down with their confusion about what I am trying to design. I get plenty of design input by reading voraciously what others have done, and research papers.
thoughts about the Halting problem with respect to Copute
I think they will find flaws in my work, but hopefully only flaws that I know exist, because they are matters owned by God, e.g. the ability to predict the future. This is why the Halting Theorem says that it is "undecidable" whether a program will ever halt. It means we can't know the answer. Examples are very simple programs called cellular automata. They follow very simple rules, maybe only 2 or 3, yet the only way to know what value they will create on the trillionth iteration is to run them a trillion times. We don't know if they will ever halt or settle into a repeating pattern. Some produce a repeated pattern for a long time, then start producing another pattern. They have patterns within patterns within patterns, which are revealed only as you run them longer.
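For illustration, here is a minimal sketch in Scala (my own example, not Copute code) of an elementary cellular automaton; rule 110 in particular is known to be Turing complete, so in general the only way to learn its state at step k is to actually run all k steps:

- Code:
// One step of an elementary cellular automaton on a ring of cells: each cell's
// next value is the bit of `rule` indexed by its (left, self, right) neighborhood.
def step(cells: Vector[Int], rule: Int): Vector[Int] =
  cells.indices.toVector.map { i =>
    val l = cells((i - 1 + cells.size) % cells.size)
    val c = cells(i)
    val r = cells((i + 1) % cells.size)
    (rule >> ((l << 2) | (c << 1) | r)) & 1
  }

// Start with a single live cell and print a few generations of rule 110.
val start = Vector.tabulate(32)(i => if (i == 31) 1 else 0)
Iterator.iterate(start)(step(_, 110)).take(12)
  .foreach(g => println(g.map(b => if (b == 1) '#' else '.').mkString))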
We can prove that some programs halt, and that is because they are not Turing complete logic machines. They are limited in what they can express.
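For example, a sketch of why: a function built only by structural recursion over finite data provably halts, because each recursive call shrinks its input; but a language restricted to such definitions cannot express everything a Turing machine can.

- Code:
// Structural recursion: each call recurses on the tail, which is strictly
// smaller, so termination is provable without ever running the program.
def length[A](l: List[A]): Int = l match {
  case Nil    => 0
  case _ :: t => 1 + length(t)
}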
My job now is to give the language a way to express that these "futures contracts" do not exist in portions of the program. Once we isolate where the futures contracts are, we can isolate those portions that cannot be composed freely, will always have bugs, and will be "undecidable".
P.S. Remember my work where space-time is just a perception. The mimic octopus illustrates how we can be fooled by our space-time senses.
Last edited by Shelby on Sat Feb 19, 2011 12:36 pm; edited 1 time in total
Copute does not repeat the "Billion Dollar Mistake"
Copute has an orthogonal Maybe type, which is never automatically unboxed without first being checked.
Other languages default their types to nullable, which can cause an unchecked exception on every use of every type.
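Scala's Option is the closest mainstream analogue (a sketch with a hypothetical find function; Copute's Maybe syntax is not shown here). Absence is explicit in the type, and the value must be checked before it can be unboxed:

- Code:
// With Option, "no value" is part of the type, not a hidden null.
def find(id: Int): Option[String] =
  if (id == 1) Some("Alice") else None

// The result must be pattern matched (checked) before use, so there is no
// silent NullPointerException lurking on every dereference.
val greeting = find(2) match {
  case Some(name) => "Hello, " + name
  case None       => "No such user"
}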
Was Knuth wrong about coroutines?
Tangentially, Donald Knuth is like an idol to many in computer science. He seems to be a very humble, likable, productive, and super intelligent person.
Regarding his statement about coroutines, "Subroutines are special cases of ... coroutines." He also wrote the following:
Coroutines are analogous to subroutines, but they are symmetrical with respect to caller and callee: When coroutine A invokes coroutine B, the action of A is temporarily suspended and the action of B resumes where B had most recently left off.
Afaics, does he have it backwards, or is the statement arbitrary? Afaics, coroutines are special cases of subroutines.
Think about it. A subroutine which is referentially transparent will return the same value for the same inputs.
Thus all a coroutine is doing is restructuring the algorithm such that each "yield" is a return value from a subroutine. I had made this observation earlier:
http://code.google.com/p/copute/issues/detail?id=24
Let me give an example with coroutines:
- Code:
// Pseudocode: "yield to" denotes a symmetric coroutine transfer of control
// (not real JavaScript); execution resumes where the target last left off.
function TwoDimensionWalk( m, n )
{
   for( var i = 0; i <= m; ++i )
      for( var j = 0; j <= n; ++j )   // bug fix: the original incremented i here
      {
         // do any thing here
         yield to Consume
      }
}

function Consume()
{
   // do any thing here
   yield to TwoDimensionWalk
}
Here it is with subroutines
- Code:
// Plain subroutines: each "yield" becomes an ordinary call that returns.
function TwoDimensionWalk( m, n )
{
   for( var i = 0; i <= m; ++i )
      for( var j = 0; j <= n; ++j )   // bug fix: the original incremented i here
      {
         NextJ()
         Consume()
      }
}

function NextJ()
{
   // do any thing here
}

function Consume()
{
   // do any thing here
}
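To make that restructuring concrete, here is a minimal Scala sketch (my own illustration, not Copute code): the coroutine's suspended state (i, j) is made explicit, and each "yield" becomes a subroutine call that returns the next value:

- Code:
// Returns a function that, on each call, yields the next (i, j) pair of the
// walk, or None once the walk has halted. The suspended state lives in the
// closure instead of in a coroutine's saved execution context.
def twoDimensionWalk(m: Int, n: Int): () => Option[(Int, Int)] = {
  var i = 0
  var j = 0
  () => {
    if (i > m) None
    else {
      val out = (i, j)
      j += 1
      if (j > n) { j = 0; i += 1 }
      Some(out)
    }
  }
}

// Consume plays the role of the other coroutine: it simply asks for the next value.
val next = twoDimensionWalk(1, 2)
Iterator.continually(next()).takeWhile(_.isDefined).foreach(p => println(p.get))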
Last edited by Shelby on Sat Feb 19, 2011 12:33 pm; edited 1 time in total
Scala was 22,000 to 48,000 LOC to implement
http://lambda-the-ultimate.org/node/1233#comment-13870
A typical programmer will average around 30 delivered, production-ready LOC per day.
Considering that I might be above average, and that I might work 50% longer per day, I am looking at something on the order of 7 to 26 months to complete Copute's compiler.
Tax strategy
http://esr.ibiblio.org/?p=2931#comment-296156
>I don’t think a “forgone license fee” counts as a loss under any accounting system.
Yeah, there’s no way the IRS would buy that. MS has been able to get massive write-offs by giving away copies of their products to schools and charities, but I don’t think NOK counts as a charity yet.
Is a lower-middleman-cost "App Store" needed?
I think my business model for Copute might be needed by the market from the perspective of both the developers and the users (customers).
http://esr.ibiblio.org/?p=2931#comment-296323
Jacob Hallén Says:
February 12th, 2011 at 11:12 am
With Symbian the customer is owned by the operator. For an app to work (without tons of tedious warnings), it needs to have a certificate signed by the operator. In exceptional circumstances you can get your certificate signed by the phone manufacturer, but most of the time this doesn’t happen, because the phone manufacturer wants to stay buddies with the operator and get him to subsidise the manufacturer’s phones.
As an app developer you are stuck in certification hell. It is tedious and damned expensive.
Enter the iPhone. The customer is owned by Apple. They will lease the customer to you on fairly decent terms. You no longer need to deal with hundreds of operators or manufacturers that see you as a threat to their business. This is a large part of the success of Apple in the market. It beats the pants off the Symbian model, because users can get the apps they want with any operator they care to use, and the developers have one target to develop for. However, Apple charges a high price for the convenience.
Google sees this and is also worried about being kept out of the walled garden with their ads. They put Android on the market, making it free and making the apps self-certified. This casts the users free from the control of both operators and handset manufacturers. It makes the developers happy, because they are no longer under the control of Apple.
http://esr.ibiblio.org/?p=2931#comment-296332
# Some Guy Says:
February 12th, 2011 at 12:27 pm
> Apple charges a high price for the convenience.
I wouldn’t call it a high price at all. 30% is quite typical for what we had to pay to distributors back in the days of software in boxes on retail shelves, and we don’t have to deal with the cost of implementing a payment system, etc.
http://esr.ibiblio.org/?p=2931#comment-296341
# Morgan Greywolf Says:
February 12th, 2011 at 2:01 pm
> I wouldn’t call it a high price at all. 30% is quite typical for what we had to pay to distributors back in the days of software in boxes on retail shelves, and we don’t have to deal with the cost of implementing a payment system, etc.
But we no longer live in a world of boxed software. These days most of us who do buy software do so online. That means the only middle men that are of any consequence are the credit card processors and the transaction clearinghouses. Why do we need Apple? Oh, I forgot, because they make us.
http://esr.ibiblio.org/?p=2931#comment-296345
# tmoney Says:
February 12th, 2011 at 4:05 pm
>Why do we need Apple? Oh, I forgot, because they make us.
And the examples of independent Android developers making their millions without using Android market place are…
http://esr.ibiblio.org/?p=2931#comment-296365
@Some Guy:
Certainly, there are examples of people getting rich off Apple’s app store, just like there are people getting rich from football, from singing, etc.
A lot of people make the assumption that paid apps are the way to go, because it’s easier to make good money with them than via advertising. That makes sense if the average developer can, in fact, make good money on the average app through Apple’s store.
I haven’t checked out this guy’s assumptions or math or numbers, but he goes into great detail, and concludes that the median paid app in Apple’s store earns $682 for its developer.
Wow! So that means a granular contribution monetization model might be a much more realistic way for programmers to get involved! Yeah!!!!
Last edited by Shelby on Sun Feb 13, 2011 6:50 am; edited 1 time in total