additional point: computer programming is the inverse of manufacturing
In the manufacture of goods, the design is replicated over and over again. So if 1 million products are produced and the cost of the design was 100 times the cost to produce each unit, then the design becomes only 1/10,000 of the labor input. This is why patents don't make any sense.
In computer programming, replication costs nothing, thus the cost of the labor of design is never a small proportion of the total cost. In fact, this is why open source trumps closed source: the labor of design is such a huge proportion of the business model that thousands of people sharing the effort of design can not be matched by a closed source model.
And this is why taking programming for free is theft and violates the Ten Commandments. Rather, it is appropriate to share the cost of the labor of design with all the other users, so the cost is amortized and reduced per unit.
P.S. I hope you are getting a clue that computer programming is a very special phenomenon that has no other analog in our human existence. This is why I am becoming so adamant that Copute must be completed.
What is proposed to go on the Copute.com homepage
http://copute.com
Copute - “massively cooperative software progress”
Copute is two projects under development:
* an open source new computer language to optimize reuse of programming code
* a for-profit repository for a more powerful economic model for open source
Imagine an individual programmer could be paid, from those distributing derivative works for-profit, simply by submitting generally reusable code modules to a repository. Imagine individual programmers could quickly create derivative works without the resources of a large corporation, due to the wide scale reusability of a new computer language model, coupled with such a repository. Imagine the orders-of-magnitude of increased economic activity in information technology that would be unleashed on the world.
Can you help? We are most in need of a software engineer who is (or thinks he/she can quickly become) a compiler expert, and wants to help complete the Copute language. A suitable compensation and incentive package can be discussed, including the option to work from your own home office from any country in the world. Do you have the desire to join the next Google at the earliest stage? Can you help us change the world? Do you have this level of passion for your work? Please email us: shelby@coolpage.com
Here follows the economic theory motivating our desire.
Open Source Lowers Cost, Increases Value
Closed source can not compete with open source, because the cost of programming is nearly the entire cost for a software company. For example, if the design of a manufactured item costs 1000 times the cost to manufacture one unit, and a million units are produced, then the design cost is an insignificant 1/1000th of the total cost. The replication and distribution costs of software are approaching zero, thus the design (programming) cost is approaching the total cost. Thus, any company that is not sharing its programming cost with other companies is resource constrained relative to those companies which are sharing their programming costs. Tangentially, this is why patents on software retard the maximum social value obtained from programming investment.
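To make the arithmetic concrete, here is a minimal Scala sketch of the design-cost fraction using the figures from the paragraph above (an illustration only, not part of any Copute code):
- Code:
object DesignCostShare {
  // Fraction of total cost attributable to design:
  //   designCost = one-time cost of the design work
  //   unitCost   = marginal cost to replicate one unit
  //   units      = number of units produced
  def designFraction(designCost: Double, unitCost: Double, units: Long): Double =
    designCost / (designCost + unitCost * units)

  def main(args: Array[String]): Unit = {
    // Manufacturing: design costs 1000x one unit's cost, a million units produced.
    println(designFraction(1000, 1, 1000000)) // ~0.000999, i.e. roughly 1/1000 of the total
    // Software: replication cost approaches zero, so design approaches the entire cost.
    println(designFraction(1000, 0, 1000000)) // 1.0
  }
}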
For-profit software companies gain market leverage and market-fitness by diversifying their products and/or services from the shared code at the top level of integration.
With a standardized open source license on code, corporations can spontaneously share the cost of programming investment, without the friction, delay, risk, uncertainty, and complexity of multiple, multifarious, bilateral, ad hoc sharing negotiations.
Increased code reuse (sharing) lowers the programming cost for corporations, enabling them to focus capital on their market-fitness integration expertise-- where they generate profits.
Project Granularity of Code Re-use Limits Sharing, Cost Savings, and Value
Open source has reduced the total programming cost to corporations and end users, but because code reuse (i.e. sharing) happens at project-level granularity, this reduction is not maximized, and costs have not decreased for smaller companies and individual programmers in those cases where they can not spontaneously build derivative code incorporating only parts of projects. Although there has been a boom in the diversity and market-fitness of software products, it is not maximized; that shortfall is the manifestation of the missing cost reduction in the aforementioned cases.
The scale of open source reuse is limited by the general lack of fine-grained reuse of code, inherent in the lack of referential transparency in existing software languages. Instead, we primarily have reuse of entire projects, e.g. multiple companies from the exchange economy sharing programming investment on Linux, the Apache server, the Firefox browser, etc. It is not easy to reuse sub-modules (i.e. parts) of these projects in other projects, due to the lack of forethought and referential transparency (i.e. too much "spaghetti code" littered with hidden, inextricable interdependencies) in existing software language and design. The granularity of code reuse dictates the scale of shared investment, because, for example, fewer companies would find an incentive to invest in an entire project than to invest in reusable code modules within projects, where those modules could be used in a greater diversity of derivative software. If the granularity of code reuse is increased by orders-of-magnitude via referential transparency in software languages, then the sharing of programming cost will also increase by orders-of-magnitude. With orders-of-magnitude lower programming cost, smaller companies and individual programmers will be able to move to the top of the food chain of integration, and there will be an orders-of-magnitude faster rate of progress and diversity of market-fitness in software.
Increased Granularity of Re-use Will Transition from Reputation to Exchange Economy
As granularity of reuse of programming code increases via new software languages that incorporate referential transparency paradigms, the economics of contribution will need to become more fungible. Currently, open source is funded by large corporations which have clear strategic paybacks for the projects they choose to invest in. At the project level of granularity, strategic ramifications are reasonable to calculate. However, as granularity of reuse increases, the diversity of scenarios of reuse will increase exponentially or greater, and thus it will become impossible to calculate a payback for a specific contribution.
Although some open source proponents have postulated that the "gift culture"[1] drives contribution, as diversity of contribution increases into smaller parts (instead of project-level contribution), the value of reputation will plummet in areas that reward free contribution. First, it will become much less necessary to gain social status in order to get other people to agree to work with you, as contribution will become disconnected from project objectives and a leader's ability to make a project succeed. Instead, contributors will need another payback incentive. Second, since the economic inputs of large corporations will diminish in significance, the value of reputation in gaining income (a job) will diminish. Instead individual programmers and smaller companies will create their own jobs, by creating more derivative software, given the lower cost of reuse of smaller modules. So the incentive to reuse increases as cost of reuse is lowered via increased granularity of referential transparency, but what will motivate contribution if the value of reputation diminishes? The answer is the value of reciprocity. Those who want to reuse, will become willing to contribute, if others who want to reuse are willing to contribute. This is an exchange economy.
Some argue that gifting contributions is driven by the incentive to have your contribution merged into the code of the derivative software you use[3]; yet with a referentially transparent programming model and automated builds from a repository, customized builds may have no ongoing maintenance. Moreover, if a bug-fix or improvement can increase the value (as defined below) of the source contribution it is fixing, even if the price of the source contribution is lowered by the price requested for the fix, then existing derivatives are not discouraged from adopting the fix. Thus, the owner of the source contribution could become the gate-keeper for merging fixes into derivative works, not the owner of the derivative work, which decentralizes the process as compared to the gift economic model currently employed for open source.
Studying history, there are no instances where the "gift culture" scaled to an optimal free market. Gift cultures exist only where inefficient social hierarchies are subsidized by abundance, e.g. open source currently funded via large corporate contributions driven by the abundant unmet end user demand for software. However, every year in China, 1 out of 6 million college graduates can't find a job and many more are underpaid and underutilized; the world is currently at a debt-saturation crossroads, and it is not clear how there will be a sufficient abundance of jobs to employ the billions in the developing world. Exchange economies are how the free market optimally anneals best-fit and diversity in times when the subsidy of abundance is lacking. Reputation is measured in fungible units, called money, or in our case, more specifically, the value of current active contributions.
The challenge is how to make the exchange of reusable small module contribution value fungible and efficient. The value of a module of code is determined by its exchange price and the quantity of derivative code that employs it. The price should be increased or decreased to maximize this value. The price is paid periodically in bulk by those distributing for-profit derivative software, not micro-payments. Typically the price would apply to a per installation (not per user) license, where upgrades (fixes and improvements) might be free to existing installations because the code is continually re-used in new derivative works that pay the price. Thus the proposed exchange economic model has an advantage over the gift model, in that upgrade costs are amortized over derivative software proliferation, instead of the service contract model.[4]
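As a rough illustration of this pricing rule, here is a hedged Scala sketch; the Module type, the linear demand curve, and the hill-climbing price adjustment are my own illustrative assumptions, not a specification of the proposed Copute repository:
- Code:
// Illustrative only: value of a module = price x quantity of for-profit derivatives employing it,
// with demand assumed to fall linearly as the price rises.
case class Module(price: Double, derivativesAt: Double => Long)

object RepositoryPricing {
  def value(m: Module): Double = m.price * m.derivativesAt(m.price)

  // Naive one-step adjustment: move the price in whichever direction increases value.
  def adjustPrice(m: Module, step: Double): Module = {
    val up   = m.copy(price = m.price + step)
    val down = m.copy(price = (m.price - step) max 0.0)
    Seq(m, up, down).maxBy(value)
  }

  def main(args: Array[String]): Unit = {
    // Assumed demand curve: fewer derivatives adopt the module as its price rises.
    val demand: Double => Long = p => math.max(0L, (10000 - 1000 * p).toLong)
    var m = Module(price = 1.0, derivativesAt = demand)
    for (_ <- 1 to 20) m = adjustPrice(m, 0.5)
    println(f"price=${m.price}%.2f value=${value(m)}%.2f") // settles near the value-maximizing price
  }
}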
Copyright Codifies the Economic Model
So if for-profit derivative use of a contribution has a price, then contributing stolen code could destroy value. This is copyright, and it is analogous in the gift (reputation) economy to the use of copyright in existing open source standard licenses to require that derivative works give credit to the source contribution. In each economic model, copyright codifies the restrictions that make that model. Closed source is a command economy, free open source is a gift (reputation) economy, and priced open source is an exchange economy. Note that since the gift (reputation) economy relies on social status and the abundance subsidy, it does not economically maximize spontaneous individual efforts, i.e. it has similarities to socialism.
- Code:
Economic Model   Copyright Restriction
---------------------------------------------
Command          no derivatives, no copies
Gift             no delisting the source
Exchange         no selling of stolen source
Human language writing is mostly a gift (reputation) economy, because the writer gains upsell benefits as his ideas and reputation grow, even via those who re-distribute the writings without paying royalties. Programming is different from writing in a crucial aspect-- the end users never read the code and 99.9% of the population are not programmers[2] (and even amongst programmers, most do not read the code of every software they use). In the current hacker gift (reputation) economy model, programmers can gain value by building reputation amongst a smaller circle of peers who know their work well, and given that the funding for contribution comes from large corporations whose strategic interests are met by the free sharing model, prevention of theft adds no value. However, if granularity of reuse incentivizes a move towards more of an exchange economy model, then theft is a parasite on value, as there are fewer upsell benefits when reputation is morphed from small circles of social hierarchy to wider scale exchange value, and so copyright will morph to codify that theft is not allowed. Note that value in the exchange economy is based on quantity of derivative reuse, thus it creates more value to incentivize free reuse where those derivatives will be reused in paid derivatives. Hence the differential pricing in exchange economics, i.e. free use for non-profit purposes or upstart business volumes, and volume discounts for for-profit use.
There is not likely to be a complete shift from reputation to exchange economics, but rather a blending of the two.
[1] Eric Raymond, Homesteading the Noosphere, The Hacker Milieu as Gift Culture, http://www.catb.org/~esr/writings/homesteading/homesteading/ar01s06.html
[2] http://stackoverflow.com/questions/453880/how-many-developers-are-there-in-the-world
[3] Eric Raymond, The Magic Cauldron, The Inverse Commons, http://www.catb.org/~esr/writings/cathedral-bazaar/magic-cauldron/ar01s05.html
[4] Eric Raymond, The Magic Cauldron, The Manufacturing Delusion, http://www.catb.org/~esr/writings/cathedral-bazaar/magic-cauldron/ar01s03.html
Made significant improvements to the economic theory
The section "Increased Granularity of Re-use Will Transition from Reputation to Exchange Economy" had been significantly improved, as I read and refuted Eric Raymond's arguments against an exchange economic model in his The Magic Cauldron.
Note I am not disagreeing with Mr. Raymond's other points about open source. I disagree with him only on the narrow point of gift (reputation) vs. exchange, as the correct model for maximizing progress in open source. And I don't entirely disagree with him about the value of the gift (reputation) model, as I state at the end of my theory page, that we are likely to see a blending of the two models.
I think the theory is extremely important. If you were an angel investor or software engineer seriously considering the Copute opportunity, you would want to know the detailed theory of why it is being done before you leap with your capital or career.
SPOT
Functional programming by itself is not enough to maximize modularity and composability.
So multi-paradigm languages attempt to mix the best of OOP with the best of FP:
http://c2.com/cgi/wiki?ObjectFunctional
Critical to this is SPOT (single-point-of-truth), which is the ability to localize specification in order to maximize modularity and composability (e.g. enables inversion-of-control or Hollywood principle).
Critical to this is Scala's unique algorithm for multiple inheritance. I improved this in a crucial way, by unconflating the Scala trait into an interface and mixin, and adding static methods. I found this to be necessary to maximize modularity and composability, when for example designing the monadic OOP code that I had sent you the link for in a prior email.
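For intuition, here is a minimal Scala 2 sketch of that separation (my own illustration, not Copute code). Scala only has traits, so the interface, the implementation that clients never reference directly, and the "static method" that abstracts construction are simulated by convention with a trait, a class, and a companion object:
- Code:
// "Interface": signatures only; client code references this type and nothing else.
trait Stack[T] {
  def push(x: T): Stack[T]
  def pop: Option[(T, Stack[T])]
}

// The "static method" added to the interface (simulated with a companion object):
// it abstracts construction, so callers never name a concrete implementation.
object Stack {
  def empty[T]: Stack[T] = ListStack[T](Nil)
}

// The implementation: intended to be constructed via Stack.empty or inherited,
// never referenced directly as a type by client code.
case class ListStack[T](elems: List[T]) extends Stack[T] {
  def push(x: T): Stack[T] = ListStack(x :: elems)
  def pop: Option[(T, Stack[T])] = elems match {
    case h :: t => Some((h, ListStack(t)))
    case Nil    => None
  }
}

object SpotDemo {
  def main(args: Array[String]): Unit =
    println(Stack.empty[Int].push(1).push(2).pop) // Some((2, ListStack(List(1))))
}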
If you look at other multi-paradigm attempts, e.g. D Language, etc., you find they don't have this unique multiple inheritance, nor the improvements Copute makes to Scala.
What I also did with Copute was generalize purity (referential transparency) to the class model and closures.
So this is why Copute is a major advance over every language I have studied. If I could find Copute's features in an existing language, I would be very happy.
Some links:
http://en.wikipedia.org/wiki/Single_Point_of_Truth
http://en.wikipedia.org/wiki/Single_Source_of_Truth
http://en.wikipedia.org/wiki/Hollywood_principle
Constructors are considered harmful (<--- must read as it has the major points of this post)
Expression Problem
Google it to know what it is.
Only statically typed languages can solve it. It can be solved with functional extension or OOP extension.
I discuss this, and especially in Comment #2 I explain why SPOT is superior to ad-hoc polymorphism (duck typing):
http://code.google.com/p/copute/issues/detail?id=39
Programming languages chart
http://www.pogofish.com/types.png
http://james-iry.blogspot.com/2010/05/types-la-chart.html
Copute will be on the "pure" row, to the right of Clean, in the "higher-kinded" column.
Note that the chart does not illustrate whether nominal or structural typing is used, nor the aggressiveness of type inference.
JavaCC LL(k) lexer and parser generator
1. Copute's grammar requires tracking of indentation around newlines, e.g. NLI means newline-indent (a rough sketch of such a lexer follows this numbered list):
http://copute.com/dev/docs/Copute/ref/grammar.txt
This is done so that Copute can fix some common human errors that occur around newlines, when semicolons ( ; ) are not required to terminate a line of code (as is the case in Copute and JavaScript).
JavaCC automatically generates a lexer, but I can see no way to tell it to track indenting to qualify the newline tokens sent to the parser:
http://www.cs.lmu.edu/~ray/notes/javacc/
http://www.engr.mun.ca/~theo/JavaCC-Tutorial/javacc-tutorial.pdf
http://javacc.java.net/doc/javaccgrm.html
2. JavaCC's .jj BNF specification syntax is too verbose (compared to Copute's grammar format). It might be possible to automatically convert Copute's terser BNF format to a .jj file, so this issue could be mitigated.
3. JavaCC generates Java code and although I could probably interface Java and Scala code (since I want to write the rest of the compiler in Scala), this is fugly. I would like to use Scala (eventually bootstrapped to Copute) code, to make the code elegant and easy-to-read.
4. Looks like JavaCC generates non-optimal parse tables and parsing code.
5. JavaCC appears to be a monolithic (tangled spaghetti) design (apparently I can't even untangle the lexer and parser). I would prefer to build a parser generator that includes multiple composable modules, e.g. the generation of First(k) and Follow(k) sets should be orthogonal to a module that generates a parse table from those sets, and orthogonal yet again to a module that generates Scala code for the parser that integrates with the parse table, etc.. The point is I would like to have source code I can wrap my mind around.
6. I would like a tight coupling of the naming in the BNF file and the class names of the AST.
7. The documentation for JavaCC and JJTree seems overly complex. Surely we can simplify this.
8. The time we spend trying to shoehorn JavaCC, might be enough to code our own superior parser generator. I think the lexer for Copute needs to be hand-coded, not from a DFA generator.
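To make point 1 concrete, here is a hedged Scala sketch (my own illustration, not Copute's actual lexer) of a hand-rolled tokenizer that emits a newline token qualified by the indentation of the following line, which is the information the parser needs to decide whether a newline terminates a statement:
- Code:
// A minimal sketch of tracking indentation around newlines,
// emitting a NewlineIndent token that records the indent column of the next line.
sealed trait Token
case class Word(text: String) extends Token
case class NewlineIndent(indent: Int) extends Token

object IndentLexer {
  def tokenize(src: String): List[Token] =
    src.split("\n", -1).toList.zipWithIndex.flatMap { case (line, i) =>
      val indent = line.takeWhile(_ == ' ').length
      val words  = line.trim.split("\\s+").filter(_.nonEmpty).map(Word).toList
      // Every newline becomes a token qualified by the indent of the following line.
      if (i == 0) words else NewlineIndent(indent) :: words
    }

  def main(args: Array[String]): Unit =
    // The indented "+ c" continues the first statement; "d = e" at column 0 starts a new one.
    tokenize("a = b\n  + c\nd = e").foreach(println)
}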
> I tend to agree with you sometimes in the long run it is much much better to code it yourself than rely completely on a library that doesn't do exactly what you need, or is just too chaotic for you to ever fully understand then waste time trying to modify it.
> Those types of problems tend to get worse as the project progresses as well. I also place a very high value on fully understanding the code I use if I need to modify it, and the best way to do that is to write it yourself.
> I find personally I can very quickly decide which 3rd party libraries I like and which I don't after giving them some review, and it seems you've already decided you don't like JavaCC
Additional points:
A) One milestone test of a new language, is to write the compiler for that language in that language (a/k/a bootstrapping).
B) Writing the parser in a more modular design means not only that we can understand it more readily, but that so can others who might want to help contribute. Although JavaCC has a lot of contributors and users, they may not share our needs and focus, i.e. Java is far in principles, culture, features and focus from Copute and Scala:
http://copute.com/dev/docs/Copute/ref/intro.html#Java
C) Top-of-my-head guess: I doubt the parser generator will take more time to code than figuring out how to get the lexer we need to work with JavaCC's integrated and apparently inseparable lexer.
=================================
http://members.cox.net/slkpg/
SLK currently supports the C, C++, Java and C# languages on all platforms. Note that the SLK parsers primarily consist of tables of integers as arrays and a very small parse routine that uses only simple, universally supported features of the language. This is in contrast to parser generators that produce recursive descent code, which must force a certain programming model on the user because the parsing code is intermixed with the user action code. SLK places no limits or requirements on the coding style of the user.
http://members.cox.net/slkpg/documentation.html#SLK_Quotations
Performance comment from Fischer & LeBlanc (1991), page 120:
Since the LL(1) driver uses a stack rather than recursive procedure calls to store symbols yet to be matched, the resulting parser can be expected to be smaller and faster than a corresponding recursive descent parser.
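To make the quoted point concrete, here is a small hedged Scala sketch of a table-driven LL(1) driver for a toy balanced-parentheses grammar; the grammar and parse table are invented for illustration and are not SLK output or Copute's grammar:
- Code:
object Ll1Demo {
  // Toy grammar for balanced parentheses:  S -> ( S ) S | epsilon
  sealed trait Sym
  case class T(c: Char) extends Sym   // terminal
  case object S extends Sym           // the single nonterminal

  // Parse table: (nonterminal, lookahead) -> production right-hand side
  val table: Map[(Sym, Char), List[Sym]] = Map(
    (S, '(') -> List(T('('), S, T(')'), S),
    (S, ')') -> Nil,
    (S, '$') -> Nil
  )

  // The LL(1) driver uses an explicit stack instead of recursive procedure calls.
  def parse(input: String): Boolean = {
    var stack: List[Sym] = List(S, T('$'))
    var rest = input + "$"
    while (stack.nonEmpty) {
      (stack.head, rest.head) match {
        case (T(c), la) if c == la => stack = stack.tail; rest = rest.tail // match terminal
        case (T(_), _)             => return false                        // terminal mismatch
        case (nt, la) =>
          table.get((nt, la)) match {
            case Some(rhs) => stack = rhs ++ stack.tail                    // expand nonterminal
            case None      => return false                                 // no table entry
          }
      }
    }
    rest.isEmpty
  }

  def main(args: Array[String]): Unit = {
    println(parse("(()())")) // true
    println(parse("(()"))    // false
  }
}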
Downside of multiplicitous syntax
Having one standard way to write common things in a language makes the code more readily readable (i.e. comprehensible) by a wider audience.
A programming language which requires mentally parsing multifarious syntactical formations increases the difficulty of becoming proficient at reading such a language.
For example Scala has numerous ways to write a method that inputs a String and returns its length as an Int:
- Code:
def len( s : String ) : Int = s.length
def len( s : String ) : Int = { s.length }
def len( s : String ) : Int = { return s.length }
val len : String => Int = s => s.length
val len = (s: String) => s.length
val len : String => Int = _.length
object len {
def apply( s : String ) : Int = s.length
}
And two ways to write methods that accept more than one argument (return types are inferred in this example):
- Code:
def f( arg1 : Int, arg2 : Int ) = arg1 + arg2
def f( arg1 : Int )( arg2 : Int ) = arg1 + arg2
===============
Someone else recently mentioned this:
http://stackoverflow.com/questions/8303817/scala-9-ways-to-define-a-method
FINISHED DRAFT: explanation of the technical rationale for Copute
Perhaps the most significant drag on programming productivity and correctness is the inability to fully comprehend all the dependencies and ramifications when interoperating with existing programming code. The severity probably increases exponentially with the size of the existing code.
The difficulty in comprehending existing code is superficially manifested as lack of familiarity, multiple obscure syntaxes for accomplishing the same task, multiple coding styles, and multiple programming paradigms. These issues are amplified fundamentally when there is no localization (a/k/a "separation") of concerns, i.e. when modules of code cannot be entirely understood in isolation. For example, if a module has ramifications on the overall state-machine that reach into other modules, then it is difficult to fully comprehend the single module without understanding the other modules. The tentacles of cross-dependencies (a/k/a "spaghetti") proliferate into an exponential number of states, thus exponentially increasing the time and complexity required to overcome the unfamiliarity of each module. Beyond some threshold of complexity of tangled dependencies, it becomes impractical to reason about how modules will interact (even for someone familiar with the existing code), thus composition of existing code is gambling (indeterminate).
The inability to extend existing code deterministically while minimizing comprehension overhead is the fundamental reason that programmers cannot interoperate with maximum efficiency, and thus software engineering follows a negative scaling law, known as The Mythical Man-Month:
http://en.wikipedia.org/wiki/The_Mythical_Man-Month#Ideas_presented
http://johntron.com/communication/the-cathedral-bazaar-mythical-man-and-bees/
The only known positive scaling law of software engineering is the brute-force style of open source, where an excess pool of labor is thrown at the problem by removing centralized control over reading the code, while keeping centralized control over writing code to the official trunk:
http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/ar01s05.html
http://www.catb.org/~esr/writings/cathedral-bazaar/linux1_d50_48kbs.mp3 (listen at 4:20)
The brute-force method does not address the fundamental problem of modularity of INDEPENDENT code extension (e.g. localization or separation-of-concerns), but rather provides for wider-scale reading participation while not removing the bottleneck of centralized control over extension and comprehension of large trunks of code.
Solving the fundamental problem of modularity of orthogonal code extension is likely to result in an exponential increase in software innovation, because the problem degrades and penalizes extension at an exponential rate, due to the exponential proliferation of tangled conflated states between modules.
This old problem was eventually named "The Expression Problem" by Philip Wadler in 1998, which basically says that "The goal is to define a datatype by cases, where one can add new cases to the datatype and new functions over the datatype, without recompiling existing code, and while retaining static type safety (e.g., no casts)".
One failed attempt to deal with this problem is the virtual method of an OOP class, which breaks the rules of the Liskov Substitution Principle. For example, if there is a class CSet which has an Add method that by default adds a new member to the set, and if this Add method is virtually overridden by a new subtype CBag which only adds unique members, then a function which inputs a CSet does not expect Add to fail when the member is not unique. Another example is a CRectangle which has a virtual method to set the length of a single side, but whose subclass CSquare will set the length of both sides.
The problem with virtual methods is that there is a default implementation, and the overriding implementation is not substitutable for it, yet the reference to the default implementation expects substitutability. Thus the solution is to instead use a referenceable interface, which specifies method signature(s) but provides no default implementation. When this interface is referenced (i.e. a function inputs an instance of the interface and calls a method of the interface), the referrer is not expecting a default implementation and thus there is no substitutability violation. Ideally the implementation (the class or mixin) is never referenced directly, but only via an interface.
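A minimal Scala sketch of the contrast described in the two paragraphs above (the class names follow the text; the code is my own illustration, not a prescribed design):
- Code:
// The problematic shape: a base class with a default, overridable implementation.
class CSet(protected val elems: List[Int]) {
  def add(x: Int): CSet = new CSet(x :: elems) // default: always adds
  def size: Int = elems.length
}
class CBag(es: List[Int]) extends CSet(es) {
  override def add(x: Int): CSet =             // silently refuses duplicates,
    if (elems.contains(x)) this else new CBag(x :: elems) // breaking CSet's expected contract
}

// The alternative: a pure interface with no default implementation to be substituted for.
trait Addable {
  def add(x: Int): Addable
  def size: Int
}
class UniqueAddable(elems: List[Int]) extends Addable {
  def add(x: Int): Addable = if (elems.contains(x)) this else new UniqueAddable(x :: elems)
  def size: Int = elems.length
}

object LspDemo {
  // This caller's expectation (adding twice grows the size by two) holds for CSet but not for CBag.
  def addTwice(s: CSet, x: Int): Int = s.add(x).add(x).size

  def main(args: Array[String]): Unit = {
    println(addTwice(new CSet(Nil), 1)) // 2
    println(addTwice(new CBag(Nil), 1)) // 1, surprising to a caller reasoning about CSet
  }
}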
Thus the key to composability (i.e. localization/separation-of-concerns) is the granularity of composing interfaces, reusable implementations of interfaces (i.e. mixins), and the declaration that a method is referentially transparent (i.e. does not involve state). To achieve this granularity and localization it is necessary to have Scala's linearized multiple inheritance, a separation of Scala's trait into interface and mixin (where mixin can never be referenced, only used in inheritance), and adding static methods to interfaces to abstract instance construction, as discussed by Bracha:
https://goldwetrust.forumotion.com/t112p135-computers#4191
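For readers unfamiliar with Scala's linearization, here is a small self-contained Scala example (my own illustration, not Copute code) showing how stacked mixins compose deterministically through linearized super calls:
- Code:
trait Greeting { def greet: String = "hello" }

// Two stackable mixins; each decorates the result of the next trait in the linearization.
trait Brackets extends Greeting { override def greet: String = "[" + super.greet + "]" }
trait Excited  extends Greeting { override def greet: String = super.greet + "!" }

object LinearizationDemo {
  def main(args: Array[String]): Unit = {
    // Super calls are resolved right-to-left through the linearized order of mixins.
    println((new Greeting with Brackets with Excited).greet) // [hello]!
    println((new Greeting with Excited with Brackets).greet) // [hello!]
  }
}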
As far as I can see, the ability to mix in existing interfaces is sufficient to solve the Expression Problem, and it is not necessary to use the other, more esoteric typing features offered by Scala:
http://www.scala-lang.org/docu/files/IC_TECH_REPORT_200433.pdf
There is no known computer language that adequately addresses the Expression Problem, because each lacks some of the aforementioned requirements. For example, Haskell has referential transparency (though it is not optional, except via the obtuse State monad), and has type classes, which are like interfaces:
http://en.wikibooks.org/wiki/Haskell/Class_declarations
But as far as I can see, Haskell lacks linearized multiple inheritance and static methods in type classes. Apparently there is no subtyping of implementation in Haskell (there is inheritance of "class", which is like an interface), so inheritance of implementation is available only ungrouped, via the maximally granular per-function inclusion in the "where" of the "instance" declaration. Maximal granularity is ideal as the upper limit of implementation mixin granularity, but not also as the lower limit. In Haskell there is no way to inherit groups of methods (a/k/a mixins), nor hierarchical structure. Also, Haskell type inference allows structural subtyping (a/k/a duck typing), where any function that does not declare the input class will match any class that has the invoked methods. Duck typing is thought to violate the Expression Problem, because the pre-existing code may be operating on a type which is not a subtype of the intended type:
http://c2.com/cgi/wiki?NominativeAndStructuralTyping
O'Haskell claims to add subtyping, but it has not been studied to see if it provides linearized mixin inheritance.
Higher Kinded type abstraction (a/k/a generics or polymorphism)
Higher Kinded types are actually quite easy to understand.
See the illustrations for section 5, on pages 6 and 7:
http://wiki.ifs.hsr.ch/SemProgAnTr/files/HigherKindGenericsInScala.pdf
- Code:
Value:   5      [1,2,3]      abstract     abstract        abstract
Type:    Int    List[Int]    List[T]      Pair[U, V]      Iterable[T, Sub[_]]
Kind:    *      *            * -> *       * -> * -> *     * -> (* -> *) -> *
Concrete (a/k/a "proper") types (e.g. Int, List[Int]) can be instantiated, i.e. have values. Abstract types (e.g. List[T], Pair[U, V]) are type constructors, i.e. they can construct a concrete type when their type parameters (e.g. T, U, V) are specified.
A higher kinded type system allows the type parameter for an abstract type to be an abstract type, instead of requiring it be a concrete type. Without higher-kinded typing we can do:
- Code:
class Iterable[T, Sub]
{
filter : T -> Bool -> Sub
}
class ListInt inherits Iterable[Int, ListInt] {}
Or:
- Code:
class Iterable[T]
{
filter : T -> Bool -> Iterable[T]
}
class List[T] inherits Iterable[T]
{
filter : T -> Bool -> List[T] // an override
}
Neither of the above enforce (at compile time, i.e. static typing) the return type of filter for subtypes of Iterable.
Whereas with higher-kinded types we can keep the SPOT (single-point-of-truth) in the base Iterable class:
- Code:
class Iterable[T, Sub[_]]
{
filter : T -> Bool -> Sub[T]
}
class List[T] inherits Iterable[T, List] {}
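The same SPOT-preserving signature can be expressed in Scala 2 today; here is a minimal sketch (the names MyIterable, MyList, and the make helper are my own illustrative choices):
- Code:
import scala.language.higherKinds

trait MyIterable[T, Sub[_]] {
  def elems: List[T]
  def make[A](xs: List[A]): Sub[A]
  // The return type Sub[T] is enforced here once for every subtype (SPOT).
  def filter(p: T => Boolean): Sub[T] = make(elems.filter(p))
}

class MyList[T](val elems: List[T]) extends MyIterable[T, MyList] {
  def make[A](xs: List[A]): MyList[A] = new MyList(xs)
}

object HktDemo {
  def main(args: Array[String]): Unit = {
    val filtered: MyList[Int] = new MyList(List(1, 2, 3)).filter(_ > 1) // statically MyList[Int]
    println(filtered.elems) // List(2, 3)
  }
}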
Note that higher-kinded typing can be emulated with C++ templates, but this is fugly and verbose, and remember that templates are reified at compile time, which as far as I recall has some drawbacks, one of which relates to the Expression Problem.
http://stackoverflow.com/questions/2565097/higher-kinded-types-with-c
The creators of Scala published a paper on their higher-kinded typing:
http://adriaanm.github.com/files/higher.pdf
simplifying the Copute grammar
In the attached file, I am overhauling the LL(k) SLK parser grammar that originally appeared here:
http://copute.com/dev/docs/Copute/ref/grammar.txt
http://members.cox.net/slkpg/
Note the attached incomplete grammar is already LL(5), i.e. 5 tokens of lookahead, while the prior grammar was LL(3).
I have significantly increased the documentation in the grammar.
One of the first goals has been to eliminate the sort of multiplicitous syntax readability problem with Scala:
https://goldwetrust.forumotion.com/t112p165-computers#4420
So now in Copute, there is only one way to declare a function, which is an expression that can be used wherever a function of the correct type is expected:
- Code:
\s -> s.length
And there is only one way to declare a reference to a function (i.e. for a method); the type of the function is always declared by what the function is assigned to (imagine also passing the function as an argument to another function), e.g.:
- Code:
len : Int -> Int = \s -> s.length
When declared within the body of an interface, mixin, or class, the above is a method (i.e. a function that inputs 'this' as an undeclared first argument). To declare a function that does not input 'this', the read+write properties should be declared (note constructor arguments are always (public,const), but can be obscured by assigning to a property with the same identifier), e.g.:
- Code:
len (public,public) : Int -> Int = \s -> s.length
A method can be declared private:
- Code:
private len : Int -> Int = \s -> s.length // as in Scala, private may be qualified with a package name, e.g. private[name]
So thus the interface to a method is just the first part:
- Code:
len : Int -> Int
And the inherited implementation in class or mixin does not need to restate the function type:
- Code:
len = \s -> s.length
Note that optional default arguments and the lazy evaluation declaration are specified where the function is, but the function interface declaration can be separated from the function body; thus in the interface:
- Code:
len : Int -> Int -> Int = \s, &pos = 0
And the implementation in the mixin or class is not required to repeat it:
- Code:
len = \s, pos -> s.substring( pos ).length
My other goals are to make type inference more local, and to enforce consistently where types should/must be specified or not. For example, types can not be specified in the function itself, which I think makes reading a function too dense anyway:
\s : Int, pos : Int -> s.substring( pos ).length // syntax error
For the default case, it is no longer necessary to prepend with 'var' or 'cref', as it is in Scala (e.g. 'def'), so a lower verbosity syntax.
I am also trying to convert some statements (e.g. switch) into expressions to make the syntax more bijective (compatible in both directions) with Scala, since everything is an expression in Scala and there are no statements.
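For instance, Scala's match is already an expression rather than a statement; a small Scala illustration of what converting switch to an expression buys (my own example, not Copute syntax):
- Code:
object MatchAsExpression {
  // In Scala every construct yields a value, so a 'switch' is just an expression.
  def describe(n: Int): String = n match {
    case 0          => "zero"
    case x if x > 0 => "positive"
    case _          => "negative"
  }

  def main(args: Array[String]): Unit =
    println(List(-1, 0, 1).map(describe)) // List(negative, zero, positive)
}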
> That's good. I've long been a fan of "one and only one way to do things".
> In fact, I've suggested Scala delete methods and just require methods be
> defined as "val mult = (x: Int, y: Int) => x * y".
Your proposal without the return type inferred would be:
- Code:
val mult = (x: Int, y: Int): Int => x * y
Copute method is declared:
- Code:
mult : Int -> Int -> Int = \x, y -> x * y
I prefer the type of the instance reference 'mult' to be unconflated from the rvalue declaration. It is crowded to read the arguments (x and y) with the type noise, especially when you add call-by-name (lazy evaluation) and default value annotations to arguments. Also, non-function types are thus unified with function types in terms of where the type declaration appears in an instance reference declaration. I prefer -> instead of Scala's => for the function constructor, because the latter looks too much like, say, +=, and it often appears close to an assignment =, thus making it look like = ... = ..., and in general I don't like overloading the meaning of '=' to mean something other than assignment. Also -> is whitespace friendly and is consistent with Haskell and other languages, and, I think, with some lambda calculus notations.
Copute non-method member which is a function type is differentiated from a method by the explicit declaration of its getter and setter permissions:
- Code:
mult (public, public) : Int -> Int -> Int = \x, y -> x * y
- Code:
sources:
{ source }+
source:
class
reference
expression
ifElse // TODO: move this to 'primary' and change 'then' to use 'exprSansIfelse' instead of 'expression', and add 'else' to that line
scope:
'{' [ NL ] sources [ NL ] '}'
reference:
[ refType ] ID typeInit // without refType, defaults to 'cref' for pass-by-reference, and 'var' for pass-by-value, instances
refType ID init // init without type is only allowed if prepended with a refType, because otherwise ambiguous with an assignment expression, e.g. no way to obscure a reference of same name in outer scope or assignment expression reference name typo silently accepted as construction of new reference
refType:
'const' // the instance can not be modified, regardless if it is pass-by-reference or pass-by-value
'ref' // the reference can be modified to point to a different instance, applies only to pass-by-reference instances
'const' 'ref' // combines the above meanings, note the reference is not constant.
'var' // 'const' 'var' is not allowed because has same meaning as 'const'
'cref' // 'const' 'cref' is not allowed because has same meaning as 'const'
typeInit:
: nfType [ [ NL ] typeInit2 ] // init is optional if cType has a default constructor
tArgList [ NL ] ':' fType [ NL ] '=' [ NL ] function
//init when refType is not specified, this is ambiguous with an assignment expression. Disambiguate from assignment when ID is not already declared. Note this means a typo on ID will silently fail assignment and create a reference that is never accessed (so maybe we can detect on compilation and error). To hide an ID of same name in outer scope, then refType must be specified. If we did not allow assignment to declare new references, then mixin inheritance of an interface would require verbose 'cref' prepended to method implementations (due to the type being already specified in the interface).
init:
'=' [ NL ] rightExpr
typeInit2:
init
nextType [ NL ] '=' [ NL ] function
nfType:
cType // include standard classes for the builtin types, Int, Float, String, Bool
fType:
nfType [ NL ] nextType
fType2
fType2:
'(' fType ')' [ NL ] nextType
'void' [ NL ] '->' [ NL ] xtype // 'void' not an instance reference type, thus can not be a class, only allowed as a single argument or return type
xtype:
nfType
'(' fType ')'
type:
nfType [ [ NL ] nextType ]
fType2
nextType:
'->' [ NL ] xtype [ [ NL ] nextType ]
'->' [ NL ] 'void' // 'void' not an instance reference type, thus can not be a class, only allowed as a single argument or return type
function:
fDecl [ NL ] fBody
fDecl:
[ purity ] '\' [ fArgs ]
fArgs:
fArg [ [ NL ] ',' [ [ NL ] fArgs ] ]
fArg:
[ '&' ] ID [ init ] // the optional '&' is for pass-by-expression, i.e. lazy evaluation, i.e. argument expression is implicitly wrapped in a function call that returns the evaluated expression. Note not allowed for return type.
purity:
'impure'
'mutable'
fBody:
'->' [ NL ] rightExpr
scope
expression:
assign
rightExpr
assign:
leftExpr [ NL ] assignop [ NLI ] rightExpr
leftExpr:
ID { [ NL ] memberRef } // only a named reference, or its class properties, may be assigned to. This simplifies our grammar significantly. Since we are targeting pure programming, we discourage assignment to a reference which is an unsaved return value, because in pure programming we can only return a new instance which thus has not been saved anywhere, so this would only be useful for impure, side-effect programming. Such impure programming can still be done (save a return value to a reference before assigning to it). Note, a method which inputs a 'void' can be called without '()', so semantic analysis should not allow assigning to the return value of such a call, nor to its class properties.
//conflicts with call below: '(' assign ')' { [ NL ] memberRef }+ // an assign has saved a reference in an ID, so we can assign to its class properties.
memberRef:
'.' [ NLI ] ID // no [ NL ] after the operator, http://code.google.com/p/copute/issues/detail?id=31
rightExpr:
boolOr [ [ NL ] '?' [ NLI ] assign [ NL ] ':' [ NLI ] assign ] // aka 'ternary' operator
boolOr:
boolAnd { [ NL ] '||' [ NLI ] boolAnd }
boolAnd:
bitOr { [ NL ] '&&' [ NLI ] bitOr }
bitOr:
bitXor { [ NL ] '|' [ NLI ] bitXor }
bitXor:
bitAnd { [ NL ] '^' [ NLI ] bitAnd }
bitAnd:
equality { [ NL ] '&' [ NLI ] equality }
equality:
compare { [ NL ] equalityop [ NLI ] compare }
compare:
extrapolate { [ NL ] compareop [ NLI ] shift }
shift:
concat { [ NL ] shiftop [ NLI ] concat }
concat:
add { [ NL ] '#' [ NLI ] add }
add:
multiply { addop [ NLI ] multiply } // see NLPLUS and NLMINUS, we can't put a NL in front of addop, because NL also separates expr, and unary contains an addop on its left-side in prefixop
multiply:
unary { [ NL ] multiplyop [ NLI ] unary }
unary:
[ prefixop ] [ postfixop ] instance [ postfixop ]
instance:
primary { [ NL ] memberRef } [ callExpr ] // Semantic analysis must determine if 'memberRef' and/or 'callExpr' is allowed on 'primary'. Also, a function call might not return an instance (i.e. 'void') for an impure function that has side-effects.
'new' [ NL ] cDecl call
primary:
ID
'(' [ NL ] expression [ NL ] ')'
'(' [ NL ] function [ NL ] ')'
switch
callExpr:
call { callSuffix }
callSuffix:
call
[ NL ] memberRef
call:
'(' [ NL ] fParams [ NL ] ')'
fParams:
expression [ [ NL ] ',' [ [ NL ] fParams ] ]
ifElse:
'if' '(' rightExpr ')' then
then:
[ NLI ] expression // use NLI instead of NL, http://code.google.com/p/copute/issues/detail?id=28
[ NL ] scope [ [ NL ] 'else' block ] // if there is an 'else', force braces on the 'if', http://code.google.com/p/copute/issues/detail?id=21#c2
block:
[ NLI ] expression // use NLI instead of NL, http://code.google.com/p/copute/issues/detail?id=28
scope
switch:
'switch' '(' [ NL ] expression [ NL ] ')' [ NL ] '{' [ NL ] { case } [ default ] '}' // We force the '(' and ')' because, http://code.google.com/p/copute/issues/detail?id=30&can=1
case:
'case' rightExpr ':' [ NL ] sourcesOrEmpty [ NL ] // allow empty case ';', so that user can do noop for specific cases without a default, to make sure all cases are coded
sourcesOrEmpty:
sources
';'
default:
'default' ':' [ NL ] sourcesOrEmpty [ NL ]
class:
genre ID [ [ NL ] tArgList ] [ NL ] cDecl // if INTERFACE, then ID must begin with "I[A-Z]", else "[A-HJ-Z][^A-Z]", http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
anonyClass:
genre [ [ NL ] funcArgs ] [ NL ] cDecl
genre:
'case'
'class'
'mixin'
'interface'
tArgList:
'<' tArgs '>'
tArgs:
[ NL ] tArg [ [ NL ] ',' [ tArgs ] ]
tArg:
[ variance ] ID [ tArgList ] [ tConstrain ] // the 'tArgList' is for a higher-kinded type parameter
variance:
'+'
'-'
tConstrain:
subTypeOf [ NL ] type [ superTypeOf [ NL ] type ]
superTypeOf [ NL ] type
subTypeOf:
':'
superTypeOf:
'<:'
cDecl:
[ inherit [ NL ] ] [ constructor [ NL ] ] cBody // semantic analysis must not allow 'constructor' for 'interface' and 'mixin'
inherit:
'inherits' cTypes
constructor:
[ 'private' [ NL ] ] fDecl
cTypes:
[ NL ] cType [ [ NL ] ',' [ cTypes ] ]
cType:
ID [ tParamList ] // ID is class
anonyClass
tParamList:
'<' types [ NL ] '>'
cBody:
'{' [ [ NL ] cMember { [ ',' ] [ NL ] cMember } ] [ NL ] '}'
cMember:
[ 'private' ] cMemberSuffix
cMemberSuffix:
cmBody
class // Note, this can be a CASE
cmBody:
[ 'static' ] ID [ etters ] [ [ NL ] typeInit ] // this can be a 'CASE ID', when 'genre' is 'interface', because no ID without 'etters' in 'interface'
'case' ID
etters:
'(' access ',' access ')' // The modifier that precedes ID must not be 'private'
access:
'public'
'new'
'private'
// Operator sets
prefixop:
addop // http://stackoverflow.com/questions/2624410/what-is-the-purpose-of-javas-unary-plus-operator
NLPLUS // http://code.google.com/p/copute/issues/detail?id=23, where this terminal will conflict with an NL or addop expected, otherwise process as PLUS...
NLMINUS // ...or MINUS
'!'
'~'
TYPEOF // can be used with type parametrization to fork functionality, although this is equivalent to overloading and/or using a case class, so I may deprecate this
postfixop:
'++'
'--'
multiplyop:
'*'
'/'
'%'
addop:
'+'
'-'
shiftop:
'<<'
'>>'
'>>>'
compareop:
// '<' // use MORE instead and flip operands, conflicts with sequence type ('tParamList')
'>'
'<='
'>='
equalityop:
'=='
'!='
'===' // compare the instance data (or overloaded), same as EQUAL for pass-by-value types
'!==' // compare the instance data (or overloaded), same as NOTEQUAL for pass-by-value types
assignop:
'='
'&='
'|='
'^='
'*='
'\='
'%='
'+='
'-='
'<<='
'>>='
'>>>='
'#='
Last edited by Shelby on Thu Dec 01, 2011 7:52 pm; edited 13 times in total
World Without Web
I like the use of the word serendipity. Now imagine a future where Copute does not happen.
Al Gore didn't create the internet, hackers at DARPA did by lying to the government.
I have to cheer Eric S. Raymond on this one.
http://esr.ibiblio.org/?p=3331#comment-310506
Yes, it’s true. I was there [in 1983]. See “a roomful of hackers with an innate distrust of hierarchy and a strong allergy to system designs with single-point vulnerability”. Our distrust extended to systems vulnerable to political single-point failures as well as technical ones.
http://esr.ibiblio.org/?p=3331#comment-310483
Do you get it yet, Ms. Love? You rely on open-source software written by me and my peers for “day-to-day functionality” every day you touch the Internet or anything that talks to it. But you don’t see it, because we’ve done our job so well that our part of the infrastructure is almost invisible to you. You are free to make snotty remarks about our inability to meet user needs only because we have created for you the luxury of ignorance. And you are able to dismiss our concerns as hysteria only because more layers of ignorance lie between you and the long struggle we have waged against similar power grabs in the past
http://esr.ibiblio.org/?p=3335
World Without Web...
...I’m going to sketch an all-too-plausible alternate history in which the World Wide Web never happened.
The divergence point for this history is in 1983-1984, when the leadership of DARPA lied through its teeth to Congress about who was being allowed access to the Internet. The pretense being maintained was that only direct affiliates of government-sponsored, authorized research programs were being allowed on. DARPA knew very well this wasn’t true; they had a broader vision of where the Internet might lead, one that wouldn’t be realized for another ten years. They viewed the yeasty, chaotic culture of casual use by all manner of ‘randoms’ (unauthorized people including, at the time, me) as what would later be called a technology incubator – a vast Petri dish from which amazing serendipities would spring in due time...
http://esr.ibiblio.org/?p=3335#comment-310551
>Sounds like you like government after all.
Remember, I attribute the takeoff of the Internet to DARPA deliberately lying to its political masters. This happened precisely because DARPA’s leadership knew it was impossible within the incentives operating on governments for them to tolerate the ‘randoms’ or the results DARPA foresaw.
http://esr.ibiblio.org/?p=3335#comment-310626
>Further, like most advocates of the state who say “if there was no government who would build roads?”
If you think this what-if qualifies me as an “advocate of the state”, you’re out of your fucking mind.
I don’t know what evolutionary path would have gotten a free market in telecomms to an Internet. I do know that we didn’t get to run that experiment, because there was a government-created telecomms monopoly firmly in place. Even after the AT&T divestiture our infrastructure retained the stamp of that monopoly in ways both large and small. We are just lucky that DARPA subverted the normal centralizing tendency of government at a crucial time.
http://esr.ibiblio.org/?p=3335#comment-310703
>But I will point to the existence of the SF mailing list as evidence that, just as Eric says, the DARPA folks were lying about off-program use of the Arpanet.
Heh. I’d bet that half the DARPA program monitors were on that list. Here, on information and belief, is how I think their reasoning went:
“This list is fun. It’s also an experiment in a novel mode of communication (most historians, including me, think it was the very first Internet mailing list). As such, it’s legitimately part of our research into the consequences of cheap, robust wide-area networking. And so is letting on all those college kids who don’t technically qualify under the program charter. But if we try to explain that at the Congressional hearings, we’re sure as shit going to get Proxmired.”
It was a reasonable fear. William Proxmire trashed a lot of immensely valuable research in the course of his political grandstanding. Once or twice he even apologized afterwards, but the damage had been done. I’ve had it pretty strongly hinted to me that certain of those responsible believed lying to Congress was less risky than telling the truth where Proxmire could get wind of it.
I suspect the Internet program managers were neither the first nor the last to make that choice. And be justified in making it.
Last edited by Shelby on Mon Jun 20, 2011 7:01 am; edited 2 times in total
Ahhh, State monad and Traversable come clear in my mind and my explanation
Finally I have a very good understanding and explanation of the State monad, Traversable, and everything I have been working on in this file.
Also, I have generalized the Traversable over the state-of-the-art in published research.
I am very satisfied with my explanation and code for the State monad, as well as the examples I gave for using it with ITraversable. Try and try and I doubt you will find any equivalently concise layman example implementations of State monad that are not in the obtuse Haskell syntax. Finally I can say that a common person can fully understand the State monad (and Traversable, Applicative, etc) without pulling their hair out for days and weeks (as I and many others have done).
Also I now have a sentence that begins with "Conceptually," at the start of the documentation for each of these (IFunctor, IApplicative, IMonad, State, ITraversable), which fully explains the significance of each one. Why should I care about, or use, this? I now have the answers, succinctly.
It is also illuminating that the Copute syntax is darn slick, concise, and not confusing.
I am ecstatic with this accomplishment and milestone.
Note this file is still a little bit incomplete (has some incorrect code in some of the mixin implementations).
I have attached my latest draft of the Copute grammar, which is incomplete and driving towards improving on what is at the Copute.com website, as discussed here:
https://goldwetrust.forumotion.com/t112p165-computers#4426
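For comparison with the State monad code in the attached file below, here is a minimal State monad sketch in Scala (my own names and encoding, not the Copute file): State wraps a function S => (S, T), and flatMap threads the state through, which is what the IMState/IState classes below do with 'm' and 'bind':
- Code:
final case class State[S, +T](run: S => (S, T)) {
  def map[A](f: T => A): State[S, A] =
    State { s => val (s2, t) = run(s); (s2, f(t)) }
  def flatMap[A](f: T => State[S, A]): State[S, A] =
    State { s => val (s2, t) = run(s); f(t).run(s2) }
}
object State {
  def wrap[S, T](t: T): State[S, T] = State(s => (s, t))   // a.k.a. unit / pure
}
object StateDemo extends App {
  // Count elements while relabeling them, threading an Int counter as the state.
  def countAndLabel(x: String): State[Int, String] = State(n => (n + 1, s"$n:$x"))
  val program = for {
    a <- countAndLabel("foo")
    b <- countAndLabel("bar")
  } yield List(a, b)
  println(program.run(0))   // (2, List(0:foo, 1:bar))
}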
- Code:
interface IFunctor< Sub<T> : IFunctor<Sub,T>, T+ >
{
map<A> : (T -> A) -> Sub<A>
}
// Functional programming (FP) paradigms are fundamentally about not repeating oneself.
// Functor enables the reuse of all functions of type, T -> A, to which we give the name "morphism functions".
//
// A functor has a 'map' method, which enables all morphism functions of type, T -> A,
// to operate on functors of type, Sub<T>, producing a result of type, Sub<A>.
// Conceptually, a functor maps all functions of type, T -> A, to functions of type, Sub<T> -> Sub<A>
//
// This enables the reuse of these morphism functions, without requiring specialized versions of type, Sub<T> -> Sub<A>.
// For example, if List<T> is functor, then a function of type, Int -> String, can convert a List<Int> into a List<String>.
// There is no need to code a specialized morphism function of type, List<Int> -> List<String>.
//
// A functor has some computation, data structure, or other form of state wrapped around the instance(s) of any type T.
// Thus all functions between unwrapped types are automatically lifted to functions between the wrapped types.
//
// The parametrized subtype, Sub<T>, has a 'map' method which returns an instance of the subtype, Sub<A>,
// where the type parameter T has been converted to a type A.
//
// The 'map' method inputs a function, T -> A,
// i.e. that function inputs a type T and returns a type A.
//
// Note that since 'map' is a method then it inputs 'this', so when viewed as a function,
// it has the type, Sub<T> -> (T -> A) -> Sub<A>.
//
// For example, given the input parameter to 'map' is a function of type, Int -> String,
// i.e. that inputs an integer and returns a string,
// then a (subtype of IFunctor that is a) list of integers, List<Int>,
// is converted to a list of strings, List<String>.
//
// Mapping is between the same subtype, Sub, i.e. Sub<T> -> Sub<A> and not Sub<T> -> Sub2<A>.
// This is because the subtype may not always be invertible (bijective)
// to the wrapped parametric type, T. For example, since Maybe<T>.none contains no instance of T
// (i.e. it can't be inverted to an unwrapped T), it is conceptually sensible to map it to an empty List<A>.
// However, there is no consistent model that will enable such a mapping and
// hold true in all general cases of destination subtypes.
//
// In category theory, a functor is a model of COMPOSABLE mappings between categories.
// In our case, each category is a concrete realization of the type, Sub<T>,
// i.e. the type parameter T is concrete type.
// Thus, our functor has a mapping from category Sub<T> to Sub<A>, given the morphism function, T -> A.
// A functor follows the laws that ensure it is composable:
// http://en.wikipedia.org/w/index.php?title=Functor&oldid=433685926#Definition
// map( g ).map( f ) == map( \x -> f(g(x)) ) F(g) * F(f) == F( g * f )
// u.map( \x -> x ) == u F(id) == id
interface IApplicative< Sub<T> : IApplicative<Sub,T>, T+ > inherits IFunctor<Sub,T>
{
static wrap<A> : A -> Sub<A>
apply<T, A> : Sub<T -> A> -> Sub<A>
}
// Read first the documentation for IFunctor.
//
// Conceptually, an applicative empowers any morphism function, which inputs unwrapped type(s),
// to operate on (an) applicative(s) of those type(s).
// Thus, specialized versions of morphism functions do not need to be written for each applicative subtype.
//
// Using the 'apply' method, a function of type, T -> A, which inputs type T and returns type A,
// can operate on an applicative of type Sub<T>, and return an applicative of type Sub<A>.
//
// Note that if the function inputs multiple parameters, then A will be the curried function type,
// e.g. if the function is of type, T -> B -> C, then the A in T -> A is of type, B -> C.
// Thus the example function would return a value of type C, from two applicative inputs--
// first of type Sub<T>, second of type Sub<B>.
//
// For example, a function of type, Int -> Int -> Int, which adds two integers and returns the result,
// can input two Maybe<Int> and return a Maybe<Int>.
// Thus, if either of the inputs is a Maybe<Int>.none (i.e. "null"), then 'apply' does not call the function
// and a Maybe<Int>.none is returned.
//
// The parametrized subtype of an applicative, Sub<_>, has a static 'wrap' method
// that constructs an instance of itself, Sub<A>,
// from an input argument which is an instance of type, A.
//
// Applicative is a functor that adds a mapping, 'apply', which inputs the morphism function, T -> A,
// wrapped in an instance of its subtype, Sub<T -> A>.
//
// Note that since 'apply' is a method then it inputs 'this', so when viewed as a function,
// it has the type, Sub<T> -> Sub<T -> A> -> Sub<A>.
//
// Wrapping the input morphism function enables multiple 'apply' to be chained,
// so that a multiple parameter morphism function,
// e.g. T -> B -> C -> D where the A in T -> A is of type B -> C -> D,
// can be distributed to multiple instances of applicative, e.g. Sub<T>, Sub<B>, Sub<C>.
// Without such a model, the equivalent code is excessively verbose
// and must be specialized for each subtype, as shown in examples below, and also at this link:
// http://en.wikibooks.org/wiki/Haskell/Applicative_Functors#Using_Applicative_Functors
//
// For example, given a morphism function of type, Int -> Int -> Int -> Int:
// f : Int -> Int -> Int -> Int = \a, b, c -> a + (b * c)
// Apply this function to 3 lists, List( 1, 2, 3 ), List( 10 ), List( 6, 7 ), as follows:
// List( 1, 2, 3 ).apply( List( 10 ).apply( List( 6, 7 ).apply( List.wrap( f ) )))
// List( 1, 2, 3 ).apply( List( 10 ).apply( List( 6, 7 ).map( f ) ))
// IDEs should optionally allow entry and rendering as if 'apply' and 'map' are operators <*> and <<<
// (TODO: formalize Copute grammar for operator vs. method equivalents):
// List( f ) <*> List( 6, 7 ) <*> List( 10 ) <*> List( 1, 2, 3 )
// f <<< List( 6, 7 ) <*> List( 10 ) <*> List( 1, 2, 3 )
// The result is List( 16, 26, 36, 17, 27, 37 ).
//
// Each application of 'apply' curries the function, f, with the elements of the list:
// List.wrap( f ) == List( f )
// List( 6, 7 ).apply( List( f ) ) == List( f(6), f(7) )
// List( 10 ).apply( List( f(6), f(7) ) ) == List( f(6)(10), f(7)(10) )
// List( 1, 2, 3 ).apply( List( f(6)(10), f(7)(10) ) ) == List( f(6)(10)(1), f(6)(10)(2), f(6)(10)(3), f(7)(10)(1), f(7)(10)(2), f(7)(10)(3) )
// List( f(6)(10)(1), f(6)(10)(2), f(6)(10)(3), f(7)(10)(1), f(7)(10)(2), f(7)(10)(3) ) == List( 16, 26, 36, 17, 27, 37 )
//
// f is a function of type Int -> Int -> Int -> Int List( f ) is type List<Int -> Int -> Int -> Int>
// f(6) is a curried function of type Int -> Int -> Int List( f(6), f(7) ) is type List<Int -> Int -> Int>
// f(6)(10) or f(6,10) is a curried function of type Int -> Int List( f(6)(10), f(7)(10) ) is type List<Int -> Int>
// f(6)(10)(1) or f(6,10,1) returns an Int with value 16 List( 16, 26, 36, 17, 27, 37 ) is type List<Int>
//
// The equivalent code without an applicative model is:
// ref a = List( 6, 7 ), b = List( 10 ), c = List( 1, 2, 3 )
// ref result = List(
// f(a.head, b.head, c.head),
// f(a.head, b.head, c.tail.head),
// f(a.head, b.head, c.tail.tail.head)
// f(a.tail.head, b.head, c.head),
// f(a.tail.head, b.head, c.tail.head),
// f(a.tail.head, b.head, c.tail.tail.head) )
//
// The applicative model expresses the semantics more clearly:
// ref result = List( f ) <*> List( 6, 7 ) <*> List( 10 ) <*> List( 1, 2, 3 )
//
// The Maybe type can wrap the none case:
// Maybe.wrap( f ) <*> Maybe.wrap( 3 ) <*> Maybe.wrap( 4 ) <*> Maybe.wrap( 5 ) == Maybe.wrap( 23 )
// Maybe.wrap( f ) <*> Maybe.wrap( 3 ) <*> Maybe<Int>.none <*> Maybe.wrap( 5 ) == Maybe<Int>.none
//
// The equivalent code without an applicative model is:
// ref a = Maybe.wrap( 3 ), b = Maybe.wrap( 4 ), c = Maybe.wrap( 5 ) // or b = Maybe<Int>.none
// ref result = \ {
// switch( a ) {
// case none: return a
// case is( ia ):
// switch( b ) {
// case none: return b
// case is( ib ):
// switch( c ) {
// case none: return c
// case is( ic ): return Maybe.wrap( f(ia,ib,ic) )
// }
// }
// }
// }()
//
// The above equivalent code is computationally more efficient than the applicative model,
// because the case 'none', can short-circuit the subsequent tests for case 'none'.
// Whereas, the applicative model tests for case 'none' on every 'apply' call, i.e. innermost 'apply' called first,
// and Maybe<Int>.none is returned (instead of for example Maybe.wrap( f(3)(4) )) to be input by the next 'apply':
//
// c.apply( b.apply( a.apply( Maybe.wrap(f) )))
//
// In contrast, since IMonad.bind inputs an unwrapped morphism function that returns a type of Sub<A> instead of A,
// thus IMonad.bind calls are chained by nesting lambda functions, i.e. outermost bind called first,
// which can thus be short-circuited:
//
// c.bind( \ic -> b.bind( \ib -> a.bind( \ia -> Maybe.wrap(f(ia,ib,ic)) )))
//
// However, this ability to short-circuit has the cost that IMonad does not generally compose:
// The Essence of the Iterator Pattern, Gibbons & Oliveira, sec2.3 on pg 5, sec3 on pg 6, sec3.3 on pg 8
// Applicative programming with effects, McBride & Paterson, sec5 on pg 8
// http://blog.tmorris.net/monads-do-not-compose/
//
// I was originally incorrectly thinking that an empty list would conflict with the semantics of applicative:
// http://goldwetrust.forumotion.com/t112p150-computers#4294 (this link also contains an idea for avoiding exceptions with Nil)
// However, I now realize that an empty list is analogous to Maybe<T>.none,
// such that an empty list should be returned from chained 'apply'.
//
// Every subtype must fulfill the following laws from category theory.
// The Essence of the Iterator Pattern, Gibbons & Oliveira, sec3 on pg 6 (laws ensure any applicative expression can be converted to the canonical form)
// Applicative programming with effects, McBride & Paterson, sec2 on pg 3
// (i) wrap id 'ap' u = u u.apply( wrap( \t -> t ) ) == u
// (ii) wrap (*) 'ap' f 'ap' g 'ap' u = f 'ap' (g 'ap' u) u.apply( g.apply( f.apply( wrap( \f,g,u -> f(g(u)) )))) == u.apply( g ).apply( f )
// (iii) wrap f 'ap' wrap x = wrap (f x) wrap( x ).apply( wrap(f) ) == wrap( f(x) )
// (iv) u 'ap' wrap x = wrap (\f -> f x) 'ap' u    wrap( x ).apply( u ) == u.apply( wrap( \f -> f(x) ) )
mixin Applicative< Sub<T> : Applicative<Sub,T>, T+ > inherits IApplicative<Sub,T>
{
map = f -> apply( Sub.wrap(f) )
}
// Default implementation of IFunctor.map given an IApplicative.
//
// IFunctor.map has an input parameter f that is type, T -> A, thus Sub.wrap(f) returns type, Sub<T -> A>,
// and thus 'apply' returns type Sub<A>, which is the return type of IFunctor.map.
interface IMonad< Sub<T> : IMonad<Sub,T>, T+ > inherits IApplicative<Sub,T>
{
bind<A> : (T -> Sub<A>) -> Sub<A>
static join< M<A> : IMonad<M,A>, A > : IMonad<_,M> -> M<A> = nested -> nested.bind( \t -> t )
}
// Read first the documentation for IApplicative.
//
// Conceptually, monad enables morphism functions, T -> A, which operate on unwrapped types,
// to compose with functions, T -> Sub<A>, which return the wrapped subtype.
//
// Normal function composition means given function, f : A -> B, and function, g : T -> A,
// then we can write a composition function, h : (T -> A) -> (A -> B) -> T -> B = \g, f, x -> f( g(x) ),
// which inputs two functions and an instance of type T, then returns an instance of type B.
// Monad allows this composition where even if zero, one or both of the functions are instead, f : A -> Sub<B>, or g : T -> Sub<A>.
//
// To gain insight into why monadic composition is useful, consider that each instance of the Maybe<T> monad,
// contains either an instance of T or the value 'none' (i.e. conceptually similar to 'null').
// Thus functions, g, that return Maybe<A> can be composed with functions, f, that input an A.
// h : (T -> Maybe<A>) -> (A -> B) -> T -> B = \g, f, x -> g(x).map( f )
// Otherwise, we would create boilerplate to test the return value for the 'none' value, as follows.
// h : (T -> Maybe<A>) -> (A -> B) -> T -> B = \g, f, x { t = g(x); -> t != Maybe<A>.none ? f(t) : t }
// But it isn't just the small amount of boilerplate that is avoided for the Maybe monad,
// but different boilerplate for each subtype of monad, some which might be more lengthy. e.g. the State monad.
// And with subtype specific boilerplate, the generalized IMonad interface couldn't be referred to in arguments and return values.
// http://blog.sigfpe.com/2006/06/monads-kleisli-arrows-comonads-and.html
//
// Monad is an applicative model that adds a method 'bind',
// that inputs the morphism function, T -> Sub<A>, which returns Sub<A>.
//
// Functions which operate on unwrapped types, T -> A, can be input to 'map', or composed with the static 'wrap' method,
// and the composition function input to 'bind' (but 'map' might be more optimized).
//
// Note that since 'bind' is a method then it inputs 'this', so when viewed as a function,
// it has the type, Sub<T> -> (T -> Sub<A>) -> Sub<A>.
//
//
// Unlike applicative, a fully generalized composition of monads is not possible.
// The Essence of the Iterator Pattern, Gibbons & Oliveira, sec2.3 on pg 5, sec3 on pg 6, sec3.3 on pg 8
// Applicative programming with effects, McBride & Paterson, sec5 on pg 8
// http://blog.tmorris.net/monads-do-not-compose/
// Because the monad is more powerful in that the outermost of a composition executes before innermost.
// c.bind( \ic -> b.bind( \ib -> a.bind( \ia -> Maybe.wrap(f(ia,ib,ic)) )))
// Which is opposite of applicative.
// c.apply( b.apply( a.apply( Maybe.wrap(f) )))
//
// Monad's 'bind' makes possible a 'join' method that flattens a "wrapping of a wrapping" to a wrapping.
// Imagine a container of container(s) of a type, e.g. List<List<T>>, flattened to a container of that type, e.g. List<T>.
//
// Every subtype must fulfill the following laws from category theory:
// Comprehending Monads, Wadler, pg 3
// Monad laws, Monads for functional programming, Wadler, sec3
// The laws involving join, follow from the laws for a monoid, http://apocalisp.wordpress.com/2010/07/21/more-on-monoids-and-monads/
// (i) map id = id \t -> t.map( \u -> u ) == \t -> t
// (ii) map (f * g) = map f * map g    i.map( \t -> f( g(t) ) ) == i.map( \t -> g(t) ).map( \t -> f(t) )
// (iii) map f * wrap = wrap * f \t -> wrap( wrap(t) ).map( f ) == \t -> wrap( f( wrap(t) ) )
// (iv) map f * join = join * map (map f)    \t -> join( wrap( wrap(t) ) ).map( f ) == \t -> join( wrap( wrap(t) ).map( \u -> u.map(f) ) )
// (I) join * wrap = id \t -> join( wrap(t) ) == \t -> t
// (II) join * map wrap = id    \t -> join( t.map( \u -> wrap(u) ) ) == \t -> t
// (III) join * map join = join * join    \t -> join( t.map( \u -> join(u) ) ) == \t -> join( join(t) )
mixin Monad< Sub<T> : Monad<Sub<T>,T>, T+ > inherits IMonad<Sub,T>
{
map = f -> bind( \t -> Sub.wrap( f(t) ) )
apply = mf -> mf.bind( \f -> map(f) )
}
// Default implementation of IFunctor.map and IApplicative.apply given an IMonad.
//
// IFunctor.map has an input parameter f that is type, T -> A,
// thus the function, \t -> Sub.wrap( f(t) ), has type, T -> Sub<A>,
// and thus 'bind' returns type Sub<A>, which is the return type of 'map'.
//
// IApplicative.apply has an input parameter mf that is type, Sub<T -> A>,
// thus the function, \f -> map(f), has type, (T -> A) -> Sub<A>,
// because 'map' has the type, (T -> A) -> Sub<A>.
// Thus 'mf.bind' returns type Sub<A>, which is the return type of 'apply'.
interface IMState<S,T+> inherits Monad<IMState<S,_>,T> // IMState<S,_> is partial type application, because IMonad expects Sub<T>, not Sub<T,S>. Scala's partial type application syntax is more verbose, see section 5 of "Generics of a Higher Kind", and: http://stackoverflow.com/questions/6247817/is-it-possible-to-curry-higher-kinded-types-in-scala/6248296#6248296
{
m (public, const) : impure (S -> <(S,T)>)
// Overload 'map' to support morphism functions that operate on state
impure map<A> : impure (<(S,T)> -> A) -> IMState<S,T>
}
interface IState<S,T+> inherits Monad<IState<S,_>,T> // IState<S,_> is partial type application, because IMonad expects Sub<T>, not Sub<T,S>. Scala's partial type application syntax is more verbose, see section 5 of "Generics of a Higher Kind", and: http://stackoverflow.com/questions/6247817/is-it-possible-to-curry-higher-kinded-types-in-scala/6248296#6248296
{
m (public, const) : (S -> <(S,T)>)
// Overload 'map' to support morphism functions that operate on state
map<A> : (<(S,T)> -> <(S,A)>) -> IState<S,T>
}
class MState<S,T+> inherits IMState<S,T>
: (S -> <(S,T)>) = \m // <(S,T)> is a shortcut for Tuple2<S,T>
{
m = m
wrap = \t -> MState<S,T>( \x -> <(x,t)> )
bind = \k -> MState<S,T>( \x
{
<(y,b)> = m(x) // shortcut for, r : Tuple2 = m(x); y = r._0; b = r._1
-> k(b).m(y)
} )
impure map<A> : impure (<(S,T)> -> A) -> IMState<S,T> = \k -> MState<S,T>( \x
{
<(y,b)> = m(x)
-> <( y, k(y,b) )> // 'k' may be impure if it modifies state referenced by 'y'
} )
}
class State<S,T+> inherits IState<S,T>
: (S -> <(S,T)>) = \m // <(S,T)> is a shortcut for Tuple2<S,T>
{
m = m
wrap = \t -> State<S,T>( \x -> <(x,t)> )
bind = \k -> State<S,T>( \x
{
<(y,b)> = m(x) // shortcut for, r : Tuple2 = m(x); y = r._0; b = r._1
-> k(b).m(y)
} )
map<A> : (<(S,T)> -> <(S,A)>) -> IState<S,T> = \k -> State<S,T>( \x -> k(m(x)) ) // 'k' is pure because it returns a copy of the (potentially modified) state
}
// Read first the documentation for IMonad.
//
// Conceptually, the state monad enables composition of morphism functions, T -> A,
// which do not operate on the wrapped state of type, S,
// with stateful morphism functions that operate on the tuple of the wrapped state, <(S,T)> -> <(S,A)>.
//
// <(S,T)> -> <(S,A)> is short-hand for Tuple2<S,T> -> Tuple2<S,A>,
// where TupleN is a convenient way of passing N instances around in one instance.
//
// Since the stateful morphisms input the wrapped state, the State.map method is used to compose them,
// instead of the 'bind' method. Note 'bind' is called by the default mixin implementation
// of Monad.apply.
//
// The state of the state monad is an unbounded sequence of nested functions of type, S -> <(S,T)>.
// This enables 'map' (via its call of 'bind') to thread the state across morphism functions, T -> A,
// which do not operate on the state, S.
// This threading is equivalent to the normal function composition of these morphism functions.
// The benefit of using 'map' is that stateful morphism functions can also be mixed into the composition.
//
// When composing no impure morphism functions that operate on state (i.e. ones that mutate state rather than returning a copy of the modified state),
// the state monadic composition of morphism functions
// is not imperative, and any order-dependency is no different conceptually from the
// normal composition of any pure functions:
// http://goldwetrust.forumotion.com/t112p180-computers#4434
//
// Method 'bind' inputs a morphism function that does not operate on state,
// and threads (bridges) the state across this function call,
// so the state is available to a nested 'bind', 'map', or 'apply'.
//
// Method 'map' is overloaded to support (both pure and impure) morphism functions that operate on state.
//
// See the example usage of the state monad in the documentation for ITraversable
interface IMonoid< Sub : IMonoid<Sub> >
{
// An instance which is commutative as either operand to append.
// http://en.wikipedia.org/wiki/Identity_element
static identity : Void -> Sub
// An instance with the input instance appended.
append : Sub -> Sub
}
// Monoid has a default instance which is commutative over accumulation, and an associative accumulation function.
//
// Every subtype must fulfill the following laws from category theory:
// append * identity = id \t -> t.append( identity ) == \t -> t
// identity * append = id \t -> identity.append( t ) == \t -> t
// (append a) append b = append (a append b) \t,a,b -> t.append( a ).append( b ) == \t,a,b -> t.append( a.append( b ) )
interface IDualMonoid< Sub : IDualMonoid<Sub> > inherits IMonoid<Sub>
{
prepend : Sub -> Sub
}
// Dual of monoid appends in the opposite direction.
interface IMonoidHigherKind< Sub<T> : IMonoidHigherKind<Sub,T>, T+ > inherits IMonoid<Sub>
{
static identity : Void -> Sub<T>
append<A <: T> : Sub<A> -> Sub<A>
append<A <: T> : A -> Sub<A>
}
// Monoid parametrized on covariant T.
interface IDualMonoidHigherKind< Sub<T> : IDualMonoidHigherKind<Sub,T>, T+ > inherits IMonoidHigherKind<Sub,T>
{
prepend<A <: T> : Sub<A> -> Sub<A>
prepend<A <: T> : A -> Sub<A>
}
// Dual of monoid, parametrized on covariant T, appends in the opposite direction.
interface ITraversable< Sub<T> : ITraversable<Sub,T>, T+ >
{
traverse< F<_> : IApplicative<F,_>, A > : (T -> F<A>) -> F<Sub<A>>
traverse< F<_> : IApplicative<F,_>, A, C > : (Void -> C) -> (T -> C) -> (C -> T -> C) -> (T -> F<A>) -> F<Sub<A>>
accumulate< M : IMonoid<M> > : (T -> M) -> M
accumulate< M<A> : IMonoidHigherKind<M,A>, A > : (T -> A) -> M<A> -> M<A>
static dist< R<_> : ITraversable<R,_>, F<_> : IApplicative<F,_>, A > : R<F<A>> -> F<R<A>> = nested -> nested.traverse( \t -> t )
static concat< R<M> : ITraversable<R,M>, M : IMonoid<M> > : R<M> -> M = nested -> nested.accumulate( \t -> t )
static concat< R<M<A>> : ITraversable<R,M<A>>, M<A> : IMonoidHigherKind<M,A>, A > : R<M<A>> -> M<A> = nested -> nested.accumulate( \t -> t )
}
// Conceptually, traverse iterates the contained instances of the parameter T,
// enabling both mapping and accumulating with a single pass of the iteration.
//
// Each contained instance of T is input to a mapping function, T -> F<A>, which returns an applicative, F.
// Note if A == T, then this mapping function is just IApplicative.wrap.
// These applicatives are accumulated by a two-parameter function, by employing the applicative model of function application.
// Since traversable might contain only zero or one instances of T,
// accumulation functions 'identity' and 'lift', which respectively input zero and one parameters, are also required.
//
// In Haskell, see [1] (but note 'accum' replaces 'f', added 'identity' and 'lift' parameters to generalize it):
// traverseList : Applicative m => (() -> c) -> (b -> c) -> (c -> b -> c) -> (a -> m b) -> [a] -> m c
// traverseList identity lift accum map [] = pure identity
// traverseList identity lift accum map (head:[]) = pure (lift head)
// traverseList identity lift accum map (head:tail) = (pure accum) <*> (traverse identity lift accum map tail) <*> (map head)
//
// In Copute, where traversable (which is a list in the Haskell example) is generalized to IHeadable:
// traverse = \identity, lift, accum, map
// {
// if( head == Maybe<T>.none )
// -> F.wrap( identity )
// else if( tail == Maybe<T>.none )
// -> F.wrap( lift(head) )
// else
// -> traverse( identity, lift, accum, map, tail ).apply( map( head ).apply( F.wrap(accum) )) // F.wrap(accum) <*> traverse( identity, lift, accum, map, tail ) <*> map( head )
// }
//
// Thus 'map' lifts the first contained instance 'head' to an applicative,
// which is input to the second parameter of 'accum' using the applicative model of function application.
// This is repeated recursively on 'tail', thus 'accum' inputs its own output type as its first parameter.
// Thus the return type is the applicative parametrized on the output type of 'accum'.
//
// We have generalized the traverse previously proposed in the literature (see [1]),
// so that it is possible to map with a no-op accumulation function, or vice versa.
// Also our generalization allows an accumulation function to map the output container type.
// The Haskell example in the literature is obtained from our generalization as follows.
// traverseList f list = traverseList [] (\x -> [x]) (:) f list
//
// Thus traverse distributes a function over each T, and returns the Applicative of Traversable,
// instead of Traversable of Applicative, as IFunctor.map would do.
//
// Thus traverse combines the IFunctor.map and ITraversable.dist into a single walk of T.
//
// State monad can be composed with traverse to fold the container into any state type.
// traverse<IState,String>( \t -> State<Int,String>( \x -> <(x+1,t)> )).m(0) == <(count, ITraversable<_,String>)> // count a collection of String and create a new copy of the collection String
// traverse<IState,Int>( \t -> State<Int,Int>( \x -> <(x+1,t.toInt)> )).m(0) == <(count, ITraversable<_,Int>)> // count a collection of String and map to collection of Int
// traverse<IState,Int>( \x -> 0, \x -> 0, \x -> 0, \t -> State<Int,Int>( \x -> <(x+1,0)> )).m(0) == <(count, 0)> // count a collection of String, without creating a copy of the collection
//
// References:
// [1] Traversing data structures, Applicative programming with effects, McBride & Paterson, sec3 on pg 6
// Idiomatic traversal, The Essence of the Iterator Pattern, Gibbons & Oliveira, sec3.4 on pg 8
mixin Traverse< Sub<T> : Traverse<Sub,T>, T+ > inherits ITraversable<Sub,T>, IMonoidHigherKind<Sub,T>, IHeadable
{
traverse = f ->
{
if( isA<IHead> )
-> f( head ).apply( tail.traverse(f).apply( F.wrap( append ) ) )
else
-> F.wrap( identity )
}
}
// Default implementation of ITraversable.traverse, where given IMonoidHigherKind and IHeadable.
//
// See "traverse f (x : xs) = [ (:) (f x) (traverse f xs) ]" in Section 3 Traversing data structures, Applicative programming with effects, McBride and Paterson
// x is head,
// xs is tail,
// (f x) has type F<A>,
// (traverse f xs) has type F<Sub<A>>,
// (:) is IMonoidHigherKind.append which has type Sub<A> -> A -> Sub<A>,
// and inside the brackets [ and ] implies F.wrap( (:) ) and F.apply between each operand as explained on page 4.
mixin Accumulate< Sub<T> : Accumulate<Sub,T>, T+ > inherits ITraversable<Sub,T>, IHeadable
{
accumulate = f ->
{
if( isA<IHead> )
-> f( head ).append( tail.accumulate(f) )
else
-> M.identity
}
accumulate = \f, start ->
{
if( isA<IHead> )
-> tail.accumulate( f, start.append( f(head) ) )
else
-> start
}
}
// Default implementation of ITraversable.accumulate, where given IHeadable.
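Before the grammar, here is a rough Scala sketch of the IFunctor / IApplicative / IMonad hierarchy above (my own names, using a typeclass-style encoding with type-constructor polymorphism rather than Copute's subtyping of Sub<T>), including the mixin-style default implementations of 'map' and 'apply', with Option playing the role of Maybe:
- Code:
trait Functor[F[_]] {
  def map[T, A](ft: F[T])(f: T => A): F[A]
}
trait Applicative[F[_]] extends Functor[F] {
  def wrap[A](a: A): F[A]                                      // a.k.a. pure / unit
  def ap[T, A](ft: F[T])(ff: F[T => A]): F[A]                  // the 'apply' method above, a.k.a. <*>
  def map[T, A](ft: F[T])(f: T => A): F[A] = ap(ft)(wrap(f))   // Applicative mixin's default 'map'
}
trait Monad[F[_]] extends Applicative[F] {
  def bind[T, A](ft: F[T])(f: T => F[A]): F[A]
  override def map[T, A](ft: F[T])(f: T => A): F[A] = bind(ft)(t => wrap(f(t)))  // Monad mixin's 'map'
  def ap[T, A](ft: F[T])(ff: F[T => A]): F[A] = bind(ff)(f => map(ft)(f))        // Monad mixin's 'apply'
}
// Example instance: Option plays the role of Maybe<T> in the text above.
object OptionMonad extends Monad[Option] {
  def wrap[A](a: A): Option[A] = Some(a)
  def bind[T, A](ot: Option[T])(f: T => Option[A]): Option[A] = ot.flatMap(f)
}
object MonadDemo extends App {
  import OptionMonad._
  val f = (a: Int) => (b: Int) => (c: Int) => a + b * c
  // The applicative chaining from the Maybe example: wrap(f) <*> wrap(3) <*> wrap(4) <*> wrap(5) == wrap(23)
  val r = ap(Option(5))(ap(Option(4))(map(Option(3))(f)))
  println(r)  // Some(23)
}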
- Code:
sources:
{ source }+
source:
class
reference
expression
ifElse // TODO: move this to 'primary' and change 'then' to use 'exprSansIfelse' instead of 'expression', and add 'else' to that line
scope:
'{' [ NL ] sources [ NL ] '}'
reference:
[ refType ] ID typeInit // without refType, defaults to 'cref' for pass-by-reference, and 'var' for pass-by-value, instances
refType ID init // TODO: remove this once we are ready to do type inference for the assignment expression, see the //init comment below
refType:
'const' // the instance can not be modified, regardless if it is pass-by-reference or pass-by-value
'ref' // the reference can be modified to point to a different instance, applies only to pass-by-reference instances
'const' 'ref' // combines the above meanings, note the reference is not constant.
'var' // 'const' 'var' is not allowed because has same meaning as 'const'
'cref' // 'const' 'cref' is not allowed because has same meaning as 'const'
typeInit:
: nfType [ [ NL ] typeInit2 ] // init is optional if cType has a default constructor
tArgList [ NL ] ':' fType [ NL ] '=' [ NL ] function
//init when refType is not specified, this is already in the grammar as the assignment expression. Disambiguate from assignment when ID is not already declared. Note this means a typo on ID will silently fail assignment and create a reference that is never accessed (so maybe we can detect on compilation and error). To hide an ID of same name in outer scope, then refType must be specified.
init:
'=' [ NL ] rightExpr
typeInit2:
init
nextType [ NL ] '=' [ NL ] function
nfType:
cType // include standard classes for the builtin types, Int, Float, String, Bool
fType:
nfType [ NL ] nextType
fType2
fType2:
'(' fType ')' [ NL ] nextType
'void' [ NL ] '->' [ NL ] xtype // 'void' not an instance reference type, thus can not be a class, only allowed as a single argument or return type
xtype:
nfType
'(' fType ')'
type:
nfType [ [ NL ] nextType ]
fType2
nextType:
'->' [ NL ] xtype [ [ NL ] nextType ]
'->' [ NL ] 'void' // 'void' not an instance reference type, thus can not be a class, only allowed as a single argument or return type
function:
fDecl [ NL ] fBody
fDecl:
[ purity ] '\' [ fArgs ]
fArgs:
fArg [ [ NL ] ',' [ [ NL ] fArgs ] ]
fArg:
[ '&' ] ID [ init ] // the optional '&' is for pass-by-expression, i.e. lazy evaluation, i.e. argument expression is implicitly wrapped in a function call that returns the evaluated expression. Note not allowed for return type.
purity:
'impure'
'mutable'
fBody:
'->' [ NL ] rightExpr
scope
expression:
assign
rightExpr
assign:
leftExpr [ NL ] assignop [ NLI ] rightExpr
leftExpr:
ID { [ NL ] memberRef } // only a named reference, or its class properties, may be assigned to. This simplifies our grammar significantly. Since we are targeting pure programming, we discourage assignment to a reference which is an unsaved return value, because in pure programming we can only return a new instance which thus has not been saved anywhere, so this would only be useful for impure, side-effect programming. Such impure programming can still be done (save a return value to a reference before assigning to it). Note, a method which inputs a 'void' can be called without '()', so semantic analysis should not allow assigning to the return value of such a call, nor to its class properties.
//conflicts with call below: '(' assign ')' { [ NL ] memberRef }+ // an assign has saved a reference in an ID, so we can assign to its class properties.
memberRef:
'.' [ NLI ] ID // no [ NL ] after the operator, http://code.google.com/p/copute/issues/detail?id=31
rightExpr:
boolOr [ [ NL ] '?' [ NLI ] assign [ NL ] ':' [ NLI ] assign ] // aka 'ternary' operator
boolOr:
boolAnd { [ NL ] '||' [ NLI ] boolAnd }
boolAnd:
bitOr { [ NL ] '&&' [ NLI ] bitOr }
bitOr:
bitXor { [ NL ] '|' [ NLI ] bitXor }
bitXor:
bitAnd { [ NL ] '^' [ NLI ] bitAnd }
bitAnd:
equality { [ NL ] '&' [ NLI ] equality }
equality:
compare { [ NL ] equalityop [ NLI ] compare }
compare:
extrapolate { [ NL ] compareop [ NLI ] shift }
shift:
concat { [ NL ] shiftop [ NLI ] concat }
concat:
add { [ NL ] '#' [ NLI ] add }
add:
multiply { addop [ NLI ] multiply } // see NLPLUS and NLMINUS, we can't put a NL in front of addop, because NL also separates expr, and unary contains an addop on its left-side in prefixop
multiply:
unary { [ NL ] multiplyop [ NLI ] unary }
unary:
[ prefixop ] [ postfixop ] instance [ postfixop ]
instance:
primary { [ NL ] memberRef } [ callExpr ] // Semantic analysis must determine if 'memberRef' and/or 'callExpr' is allowed on 'primary'. Also, a function call might not return an instance (i.e. 'void') for an impure function that has side-effects.
'new' [ NL ] cDecl call
primary:
ID
'(' [ NL ] expression [ NL ] ')'
'(' [ NL ] function [ NL ] ')'
switch
callExpr:
call { callSuffix }
callSuffix:
call
[ NL ] memberRef
call:
'(' [ NL ] fParams [ NL ] ')'
fParams:
expression [ [ NL ] ',' [ [ NL ] fParams ] ]
ifElse:
'if' '(' rightExpr ')' then
then:
[ NLI ] expression // use NLI instead of NL, http://code.google.com/p/copute/issues/detail?id=28
[ NL ] scope [ [ NL ] 'else' block ] // if there is an 'else', force braces on the 'if', http://code.google.com/p/copute/issues/detail?id=21#c2
block:
[ NLI ] expression // use NLI instead of NL, http://code.google.com/p/copute/issues/detail?id=28
scope
switch:
'switch' '(' [ NL ] expression [ NL ] ')' [ NL ] '{' [ NL ] { case } [ default ] '}' // We force the '(' and ')' because, http://code.google.com/p/copute/issues/detail?id=30&can=1
case:
'case' rightExpr ':' [ NL ] sourcesOrEmpty [ NL ] // allow empty case ';', so that user can do noop for specific cases without a default, to make sure all cases are coded
sourcesOrEmpty:
sources
';'
default:
'default' ':' [ NL ] sourcesOrEmpty [ NL ]
class:
genre ID [ [ NL ] tArgList ] [ NL ] cDecl // if INTERFACE, then ID must begin with "I[A-Z]", else "[A-HJ-Z][^A-Z]", http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
anonyClass:
genre [ [ NL ] funcArgs ] [ NL ] cDecl
genre:
'case'
'class'
'mixin'
'interface'
tArgList:
'<' tArgs '>'
tArgs:
[ NL ] tArg [ [ NL ] ',' [ tArgs ] ]
tArg:
[ variance ] ID [ tArgList ] [ tConstrain ] // the 'tArgList' is for a higher-kinded type parameter
variance:
'+'
'-'
tConstrain:
subTypeOf [ NL ] type [ superTypeOf [ NL ] type ]
superTypeOf [ NL ] type
subTypeOf:
':'
superTypeOf:
'<:'
cDecl:
[ inherit [ NL ] ] [ constructor [ NL ] ] cBody // semantic analysis must not allow 'constructor' for 'interface' and 'mixin'
inherit:
'inherits' cTypes
constructor:
[ 'private' [ NL ] ] ':' nfType [ NL ] [ nextType [ NL ] ] '=' [ NL ] fDecl
cTypes:
[ NL ] cType [ [ NL ] ',' [ cTypes ] ]
cType:
ID [ tParamList ] // ID is class
anonyClass
tParamList:
'<' types [ NL ] '>'
cBody:
'{' [ [ NL ] cMember { [ ',' ] [ NL ] cMember } ] [ NL ] '}'
cMember:
[ 'private' ] cMemberSuffix
cMemberSuffix:
cmBody
class // Note, this can be a CASE
cmBody:
[ 'static' ] ID [ etters ] [ [ NL ] typeInit ] // this can be a 'CASE ID', when 'genre' is 'interface', because no ID without 'etters' in 'interface'
'case' ID
etters:
'(' access ',' access ')' // The modifier that precedes ID must not be 'private'
access:
'public'
'new'
'private'
// Operator sets
prefixop:
addop // http://stackoverflow.com/questions/2624410/what-is-the-purpose-of-javas-unary-plus-operator
NLPLUS // http://code.google.com/p/copute/issues/detail?id=23, where this terminal will conflict with an NL or addop expected, otherwise process as PLUS...
NLMINUS // ...or MINUS
'!'
'~'
TYPEOF // can be used with type parametrization to fork functionality, although this is equivalent to overloading and/or using a case class, so I may deprecate this
postfixop:
'++'
'--'
multiplyop:
'*'
'/'
'%'
addop:
'+'
'-'
shiftop:
'<<'
'>>'
'>>>'
compareop:
// '<' // use MORE instead and flip operands, conflicts with sequence type ('tParamList')
'>'
'<='
'>='
equalityop:
'=='
'!='
'===' // compare the instance data (or overloaded), same as EQUAL for pass-by-value types
'!==' // compare the instance data (or overloaded), same as NOTEQUAL for pass-by-value types
assignop:
'='
'&='
'|='
'^='
'*='
'\='
'%='
'+='
'-='
'<<='
'>>='
'>>>='
'#='
Last edited by Shelby on Sat Jun 25, 2011 2:56 pm; edited 1 time in total
(Haskell's) Laziness eliminates reasoning about time and space
Which is a positive:
http://copute.com/dev/docs/Copute/ref/Functional_Programming_Essence.html#Advantages
But it is also a negative, because you can't then control well time, space, and concurrency:
http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/
http://copute.com/dev/docs/Copute/ref/Functional_Programming_Essence.html#Allocation_Size_Determinism
https://goldwetrust.forumotion.com/t112p195-computers#4487
http://Copute.com
Skeptical?
| Purity
| | Eager vs. Lazy
On the positive side, lazy purity may be theoretically as fast as imperative, whereas non-lazy purity may add a log factor:
https://goldwetrust.forumotion.com/t112p180-computers#4437
http://copute.com/dev/docs/Copute/ref/Functional_Programming_Essence.html#Advantages
...enables composition of functions over a large, sparse (even infinite), or diverse data space without conflating the implementation of the granularity on the space[4], e.g. the imperative example could not do logical recursion on fibs() for list-granularity.
[4]John Hughes (1984). "Why Functional Programming Matters" (¶8 §4, "Glueing Programs Together")
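A minimal Scala illustration of that point (mine, using LazyList, which was Stream in 2011-era Scala): fibs is defined by logical recursion as an infinite sequence, and list functions compose over it lazily, so the granularity (take 10) is decided by the consumer rather than baked into the producer:
- Code:
object LazyFibs extends App {
  lazy val fibs: LazyList[BigInt] =
    BigInt(0) #:: BigInt(1) #:: fibs.zip(fibs.tail).map { case (a, b) => a + b }
  println(fibs.take(10).toList)  // List(0, 1, 1, 2, 3, 5, 8, 13, 21, 34)
}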
Last edited by Shelby on Thu Aug 25, 2011 12:49 pm; edited 4 times in total
Copute's unique feature to maximize modularity and avoid LSP errors
Prior discussion of Gilad Bracha's "Constructors are Harmful":
https://goldwetrust.forumotion.com/t112p135-computers#4191
I have refined the solution since I wrote the above. Above I was proposing that static factories in an interface could have their return type parametrized to the implemented subtype class. That does not solve the problem. Static factories in an interface are necessary for other SPOT (single-point-of-truth) and boilerplate elimination reasons, e.g. see my implementation of 'wrap' (a/k/a 'unit') in an IMonad (IApplicative), and notice how much more elegant it is than the equivalent Scalaz. Note SPOT is also a critical requirement for maximizing modularity, i.e. ease of composition and reuse.
Rather, to accomplish abstraction of constructors, what we need is to nudge programmers to input factory functions, so that any code can be abstracted over another subtype of an 'interface' (i.e. instead of calling a 'class' constructor directly, input a factory function which returns the result of calling a constructor, so the caller can change the subtype being constructed). So the important point is that we want to force programmers to create an 'interface'(s) for all their 'class' methods, which is accomplished by not allowing an implementation (i.e. a 'class' or 'mixin') to be referenced anywhere except in the 'inherits' declaration and in a constructor call. This means the type of an instance reference can not contain an identifier which is a 'class' or 'mixin', thus forcing the type of all instance references to contain only identifiers which are an 'interface'; i.e. instance references reveal the abstract interface, but do not indicate the implementation.
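A minimal Scala sketch of the factory-function idea (my own names, not Copute syntax): the algorithm depends only on an interface plus a factory argument, so a caller can swap in any other subtype without the algorithm naming a concrete class:
- Code:
trait IStack[T] {
  def push(t: T): IStack[T]
  def top: Option[T]
}
final class ListStack[T](elems: List[T]) extends IStack[T] {
  def push(t: T): IStack[T] = new ListStack(t :: elems)
  def top: Option[T] = elems.headOption
}
object FactoryDemo extends App {
  // The algorithm never names ListStack; it receives a factory for "some IStack".
  def buildFrom[T](items: List[T], empty: () => IStack[T]): IStack[T] =
    items.foldLeft(empty())((s, t) => s.push(t))
  val s = buildFrom(List(1, 2, 3), () => new ListStack[Int](Nil))
  println(s.top)  // Some(3)
}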
So Copute will have a crucial difference from Scala and the other contenders (e.g. Ceylon), in that 'interface' and 'mixin' will be separated (not conflated in a 'trait'), and only 'interface' can appear in the type of instance references. Note that in Copute (just like for a 'trait' in Scala) 'mixin' and 'interface' may not have a constructor. Scala's linearised form of multiple inheritance is retained.
Note this unique feature of Copute (along with the elimination of virtual methods, a/k/a 'override') is also necessary to enforce the Liskov Substitution Principle (which relates to the concepts of covariance and contravariance):
(note some of the following specification is out-of-date with current specification in grammar and in my various notes around)
http://copute.com/dev/docs/Copute/ref/class.html#Virtual_Method
http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
http://copute.com/dev/docs/Copute/ref/class.html#Static_Duck_Typing
http://copute.com/dev/docs/Copute/ref/function.html#Overloading
The following makes some good points, and I applaud the clear explanations of the benefits of the pure lambda calculus.
However, OOP (subtyping) is not arbitrary, because it enables more deterministic partitioning (divide-and-conquer) of the design space, which can aid in the ability to reason about large design spaces (imagine the entire body of software being granularly reusable), as compared to (Haskell's or Scalaz's emulation using implicits) structural subtyping (ad-hoc polymorphism).
I am working on a granular approach to mixing the pure lambda calculus and subtyping.
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/
On the utility of the FP approach
Referential transparency buys you modularity (the extent to which components of a program can be separated and recombined) and compositionality (the extent to which the whole program can be understood by understanding the components and the rules used to combine them).
Type systems also get more sophisticated to the extent that programs are pure. It’s easier to infer types for pure programs than it is for impure ones, for example. You also gain things like type constructor polymorphism (a.k.a. higher-kinded types) and higher-rank types. It’s my experience that in a sufficiently sophisticated program (such as a compiler for a programming language), you either use these abstractions or you repeat yourself.
Sophisticated type systems then in turn aid in program inference, which is the extent to which the computer (or the programmer with the aid of the computer) can infer the correct program from type information (think tab completion, only moreso). Program inference is currently an active area of research.
As a side-note a lot of OO folks are discovering the functional approach as a tool to aid in modular design. They call it “dependency injection”, “inversion of control”, “interpreter pattern” and the like.
A word on OO and the arbitrary
In OO, as commonly practiced, the choice of distinguished argument is arbitrary. Consider this function:
- Code:
K(x, y) = x
If we were to write this in OO style, then on which object, x or y, should the function K be dispatched? Should it be x.K(y), or y.K(x)? It’s arbitrary.
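To make the quoted point concrete, here is a Scala rendering (my illustration, not from the article; the receiver classes are hypothetical); neither placement of K is less arbitrary than the other:
- Code:
object Dispatch {
  // Plain function: no distinguished argument.
  def K[A, B](x: A, y: B): A = x

  // OO style forces an arbitrary choice of receiver.
  class XReceiver[A](x: A) { def K[B](y: B): A = x }   // usage: x.K(y)
  class YReceiver[B](y: B) { def K[A](x: A): A = x }   // usage: y.K(x)
}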
On “types of languages”
I want to get clear on some concepts. First the question of “types of programming languages”. I don’t think it’s helpful to divide programming languages into “functional”, “object-oriented”, “imperative”, etc. Sure, certain languages are better suited to certain styles of programming, but it’s important to differentiate on essentials and not mere aesthetics.
Functional programming is a restriction that the programmer puts on his programs. Having a language with a compiler that will warn you if you’re breaking referential transparency is helpful, but not essential. I do functional programming in Java, for example, which most people consider an “imperative OO” language.
My followup on my above assertion:
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4534
Another of my followups:
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4535
I think it is important to note that OOP and imperative programming are orthogonal concepts (the quote of Jason in the article appears to have conflated them):
http://en.wikipedia.org/wiki/Referential_transparency_(computer_science)#Contrast_to_imperative_programming
Imperative programming is the antithesis of referential transparency, in that the former must be evaluated in a specific order. Whereas, the OOP concepts encapsulation (i.e. information hiding), inheritance, interface, and polymorphism can be referentially transparent.
There is an equivalence (not a dichotomy) between FP and OOP in terms of the "Expression Problem" (Wadler), in that it can be solved by either adding new functions or case classes respectively.
So imperative programming is the "evil" which FP should be criticizing, not OOP.
Replace "equivalence" with "duality" (the mathematics definition, not the "dichotomy" definition where dichotomy means logical opposite complement or mutually exclusive contradiction). FP and OOP are not dichotomic paradigms (rather they are orthogonal, i.e. mutually independent, and can be mixed), and they are duals in terms of their equivalent ability to solve the "Expression Problem".
http://en.wikipedia.org/wiki/Duality_(mathematics)
Note, above I am referring to the exclusive notion of FP, i.e. referential transparency. There is also an inclusive notion of FP which muddles definitions and understanding (and leads to conflation of terms):
http://www.artima.com/weblogs/viewpost.jsp?thread=166742
Also, replace "or case classes" with the more general "or subtypes".
http://beust.com/weblog2/archives/000490.html (read from Brian Slesinsky's comment down)
http://lambda-the-ultimate.org/node/2927#comment-43268
The point is the one I already made. Imperative is always the antithesis of side-effects-free. Pure (referentially transparent) is never imperative and vice-versa. They are dichotomies, i.e. mutually exclusive. If I am wrong, I would appreciate an example? (see discussion of your example below)
1) Unless I have misunderstood your description of your point-of-view, it seems you are referring to the inclusive notion, which may be more clear from reading #2 below.
2) Order-of-execution dependence (i.e. imperative) is not side-effects-free, and thus is not pure FP. If your sub-expressions are pure FP, but your composition of them is imperative (sequenced in an execution order), then the composition is not pure (even though the sub-expressions are), i.e. the two paradigms do not mix without choosing one of the contradictions. If the composition of the pure FP expressions is not execution-order dependent, then it is pure and not imperative. Note, I assert that it is possible for a pure function to contain imperative composition which calls only pure functions (the rules I am working on have a few more requirements), thus the contradiction shifts back to pure (this is what I mean by my work on granular mixing); yet while the function is pure, the internals of the function remain order-dependent and impure.
Referential transparency means the function call can be replaced with its cached return value, anywhere in the program it was called. It thus follows that order-of-execution does not matter, except in a trivial case. In pure FP, the hierarchy of function calls dictates an order-of-execution only where there is no reuse of a function, because the compiler is free to compute and cache the value of a sub-function call before executing its caller, and to execute a pure function with different sets of arguments concurrently.
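For example (a small Scala illustration of the substitution claim, assuming f and g are pure; the names are arbitrary):
- Code:
object Substitution {
  def g(x: Int): Int = x * x            // pure: depends only on its input
  def f(a: Int, b: Int): Int = a + b    // pure

  val r1 = f(g(2), g(3))    // g(2) and g(3) may be evaluated in either order, or concurrently
  val cached = g(2)         // referential transparency: a call can be replaced by its cached value
  val r2 = f(cached, g(3))  // r2 == r1 == 13
}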
Rúnar said:
"Pure and imperative are orthogonal concepts"
Disagree, they are dichotomies, i.e. mutually exclusive (dependent because they are contradictory, can not be mixed without one overriding the other). Orthogonal means mutually inclusive (independent, freely intermixed without either paradigm contradicted).
Rúnar said:
"See the referentially transparent imperative program in the post."
I assume you are referring to where you wrote in the article, "We can also do “OO” in Haskell...".
a) Your example is using a monad (with 'do' syntactical sugar to obscure the nested 'bind' calls) to abstract side effects. Although abstraction is an "OO" concept, I assume your point is that purely functional can be written in an imperative style. Let's not conflate "OO" with imperative here, as OOP is orthogonal (not mutually exclusive) to pure FP, i.e. they can be intermixed without a contradiction. Whereas, imperative programming can not be mixed with FP (i.e. not mutually inclusive) without causing some level of the composition to be impure.
b) That example is imperative and not purely functional, because it has side-effects (e.g. printing to stdout). Yes, you can force Haskell to do impure imperative code.
Whereas, for example the State monad can be used to simulate global state in a purely functional (non-imperative) paradigm, i.e. only the pure functions in the composition that access the state are imperatively composed (order-dependent), and pure functions (that do not access the state) remain order-independent, when composed with the former using 'bind'. Here is an excerpt from my work on State monad which is not yet uploaded to my site.
// When exclusively not composing impure morphism functions that operate on state (i.e. that do not return a copy of modified state),
// the state monad composition of morphism functions (using 'bind' and 'map'),
// isolates the order-dependency drawback of global state to those pure morphism functions that operate on state,
// and any composed pure morphism functions, that do not input state, remain order-independent.
// Thus the state monad is a granular composition concurrent programming paradigm.
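Here is a minimal Scala sketch of such a State monad (my illustration only, not the unreleased Copute code the excerpt is from): only 'push' and 'pop' touch the state and are therefore order-dependent, while 'double' never inputs the state and is composed via 'map' without losing its order-independence.
- Code:
object StateDemo {
  // A stateful computation is just a pure function s => (a, s).
  case class State[S, A](run: S => (A, S)) {
    def map[B](f: A => B): State[S, B] =
      State(s => { val (a, s2) = run(s); (f(a), s2) })
    def flatMap[B](f: A => State[S, B]): State[S, B] =   // 'bind'
      State(s => { val (a, s2) = run(s); f(a).run(s2) })
  }

  def push(n: Int): State[List[Int], Unit] = State(st => ((), n :: st))
  def pop: State[List[Int], Int]           = State(st => (st.head, st.tail))

  def double(n: Int): Int = n * 2   // pure and state-free, therefore order-independent

  val prog = for {
    _ <- push(1)
    _ <- push(2)
    a <- pop
  } yield double(a)

  val result = prog.run(Nil)   // (4, List(1))
}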
"This is an imperative program with no side-effects."
No, it is a pure program.
That is why I asked you what your definition of "imperative" is, because it seems your eyes are fooled by the do syntactical sugar. The types of the pure functions determine whether they are composed purely or imperatively, not the ordering of the syntactical sugar. That composition flexibility is the beauty of the State monad, as I explained already.
"You talk a lot and say little."
I am thinking you did not yet have the epiphany, so you did not absorb what I wrote. Let me try to clarify below.
"I think you’re conflating “order-of-execution” with ordinary causal sequencing. In the expression f (g x), is it important that we execute g x before applying f to it?"
No, I covered that already in my prior reply. Concurrency in the pure lambda calculus is only possible when we reuse functions more than once. The key is that order is not forced when there is reuse, we can calculate the same function concurrently or in opposite order, e.g.
f( g x )( g y ) // or for those who prefer the notation f( g( x ), g( y ) )
"The sequencing isn’t imposed by any side-effects. It’s clear when you look at the type of (>>=) :: m a -> (a -> m b) -> m b. The second action requires the a from the first action (of type m a)"
You are confusing the purity of the functions being composed with the purity of the composition. Indeed the types of the functions determine whether they are dependent on the prior 'bind' call, but in some uses of a monad, a = b. For those cases, the composition is pure and not imperative.
As I said above, do not be fooled by the do syntactical sugar, the types of the composition control whether the composition is pure or imperative. Pure and imperative are opposites. Please try to understand.
Also I forgot to say that printing to the screen (twice) is order dependent. I think you agree there is order-dependence in the example in your article.
I am in rush here and I misstated about a = b. The only relevant point is that the types of the morphism function determine if the composition is pure or imperative. I will try to follow up with a clear example later..
Haha. Thanks.
Blame God for Godel's second incompleteness theorem for the inability to declare a consistent theory of the barrier between pure and imperative.
Pureness is relative (to lack of imperative), like everything else existential. Read Hume.
Impure composition of a pure API can render the order-independence within the pure code imperative relative to the external state. If I accepted your notion of imperative, then everything is imperative, and there is no such thing as pure code. Rather I place the barrier at the border between the pure API and the impure composition, which is deterministic relative to static typing.
You place the barrier for some arbitrary reason on pure monadic composition, which orders the monad state purely, which is not conceptually different than how normal function composition orders the final return value (which could also be considered a state and composed impurely).
I think I had a mistake. As long as the State monad is pure, the imperative code is not the monadic binding of functions that access the state, but rather any impure composition of the resultant state. The IO monad is obviously impure as soon as it calls a system function.
Continuation from the last post on prior page of this thread
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4561
It was censored, but I have a copy here.
I read "Tackling the Awkward Squad" again, and it clearly says at the end of section 2 and 7, that where the IO is used, the program is imperative and impure. So I am not disagreeing with that research paper by Peyton-Jones, who is one of the principle creators of Haskell. He makes the point that IO should normally be at the top-level functions which call into a core of pure functions. And the point of the IO type is to delineate the impure, imperative portions of the code in a language that is otherwise always pure. So when these types are used, Haskell is not a 100% pure, declarative language. And that is a good thing. This is just Haskell's way of mixing pure and impure code.
The World state is not reversible in time, because it is impossible to cache the infinite states of the World for each discrete time (besides, time is continuous, which is another reason there are infinite states). Haskell does not allow the World in "IO a" to be reversed in time, i.e. all action functions occur in forward time order and thus the functions that input IO are imperative and no longer declarative. Thus, this restriction on order does not leave the action functions pure, because a pure function must be reversible, and irreversibility is a side-effect. For example, you write some output to the World, then you read some input which depends on the user having the output before making the input. Thus the output is a global side-effect in the World that impacts the input. Contrast this with functional reactive programming, which allows inputs and outputs to occur in any order, i.e. it is declarative and order-independent. Haskell can do pure functional reactive programming, and to interface with a stateful world, every functional reactive program will need an impure, imperative "wrapper", which is what IO is.
See also my comments on this page, Pure Functional Language: Haskell
=========================
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4566
Ah I see he uncensored my posts above.
I want to share my impromptu thoughts about why pure FP might be considered declarative. Although there is a structure in function composition which implies a logical order-of-operations, the purity allows the order-of-execution to deviate without any impact on the final result. Structure with order-of-execution independence is one of the general characteristics of a declarative paradigm; whereas, the nature of an imperative program is that the final state depends on the order-of-execution, because the lack of purity means that function state transitions can not be isolated from each other. Seems to be a slam dunk against Rúnar's confusion of terms. Maybe he realized it too:
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4570
===========================
http://kawagner.blogspot.com/2007/02/why-monads-are-evil.html?showComment=1311410012629#c7474729809428588396
Shelby wrote:
It was censored, but I have a copy here.
Definitions:
'pure' = referentially transparent
'impure' = 'imperative' = not referentially transparent = referentially opaque
Rúnar said: In my view “imperative” = “monadic”
Thus, you are claiming that 'imperative' = all pure function composition, h : (b -> c) -> (a -> b) -> a -> c; h f g x = f( g x ). Thus you are claiming that 'imperative' is not distinguishable from pure FP.
The proof for the state monad is that the morphism function m : a -> State s b; m t = \x -> g (t,x), may call a function g : (a,s) -> (b,s). The monadic binding is pure function composition on the tuple, e.g. f( g (t,x) ).
That the ordering of pure function composition can produce a final result of type c or (c,s), is irrelevant to the concept of 'imperative'.
Rúnar said: No it isn’t.
My initial succinct comment was correct, 'imperative' is the antithesis of purity-- there is no other definition that gives imperative any meaning. My followup reply correctly predicted that your definition of imperative is muddled.
My further verbose replies were an attempt to try to understand how you could distinguish some forms of ordering of pure functional composition from others. I was trying to understand and explain what significance you could apply to the abstraction of a monad. The monad abstracts (wraps) some state or computation, so that it is possible to compose functions which map unwrapped types, a -> b, with those that map to wrapped types, a -> m b. I explained that, for example for the state monad, the composed functions of type, a -> b, do not operate on the state, thus they certainly can't be 'imperative' relative to the state. And as I proved above, the composition of functions of type, a -> State s b, is also pure function composition returning a tuple. You try to make a distinction that monadic ordering determines the result tuple, but the ordering of any pure function composition determines the result. There is no distinction.
The bottom line is that 'imperative' is any function that is not referentially transparent. Period. No corner cases, no muddled definition that has no meaning.
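In Scala terms (my own sketch with hypothetical names), the earlier claim that the state monad's binding is nothing more than pure function composition threading a tuple looks like this:
- Code:
object TupleBind {
  type StateFn[S, B] = S => (B, S)   // State s b, uncurried so the tuple is explicit

  // The morphism a -> State s b is a curried pure function over the (a, s) tuple.
  def m[A, S, B](g: ((A, S)) => (B, S)): A => StateFn[S, B] =
    t => (x: S) => g((t, x))

  // 'bind' is ordinary pure composition: it only threads the tuple; nothing here is a side-effect.
  def bind[S, A, B](ma: StateFn[S, A])(k: A => StateFn[S, B]): StateFn[S, B] =
    s => { val (a, s2) = ma(s); k(a)(s2) }
}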
In my next comment, I will explain why the IO monad is impure and thus 'imperative', even in Haskell.
The IO monadic composition in Haskell is claimed[1] to be pure because the World (as abstracted by IO a), and not an instance of type a, is an input and output for each action function, i.e. it is assumed that for a given state of the potentially infinite state-machine in the World, the same World state will always be returned[2]. The problem here is that the World state includes time, and thus it never has the same state. This is what I was referring to when I mentioned Coase's and Godel's theorems. The lambda calculus requires the function to be beta-reducible[3], but the structure of the World is never considered by the Haskell type checker[4][5]. Thus this is a hack to claim[1] that side-effects don't matter to purity, by assuming that the World input type of the IO action function is computable, but never having the compiler check its infinite structure. In short, the World's structure is uncomputable, which relates to undecidability and the Halting theorem.
So the IO monad is actually impure (i.e. 'imperative') and Haskell is lying about it[5]. There is an analogy to virtual methods, which at compile-time comply with the variance of the Liskov Substitution Principle for the method parameters and return types, but fail at run-time the pre- and post-condition behavioral semantics of LSP (which is unchecked by most languages). Scala makes it easier to lie about purity, because its type system does not even check for purity, so any kernel function that interacts with the world is impure.
For a language that allows the declaration of a function's type to include whether it is pure or not (e.g. Copute), as I previously stated, the barrier between 'imperative' (i.e. 'impure') and 'pure' is determined by the type of the function (the API).
[1] - [5] The links for the footnotes follow in the next reply.
[1] Section 7, Tackling the Awkward Squad, Peyton-Jones, "this is an important caveat...I have not proved it"
[2] Section 2.2, Tackling the Awkward Squad, Peyton-Jones
[3] http://en.wikipedia.org/w/index.php?title=Lambda_calculus&oldid=433678424#Computable_functions_and_lambda_calculus
[4] http://www.reddit.com/r/haskell/comments/8bhir/why_the_io_monad_isnt_a_dirty_hack/c08s5bc
[5] http://kawagner.blogspot.com/2007/02/why-monads-are-evil.html?showComment=1170781260000#c2061819858844484763
This is my final post to this blog page. I hope this helps you see the light.
I read "Tackling the Awkward Squad" again, and it clearly says at the end of section 2 and 7, that where the IO is used, the program is imperative and impure. So I am not disagreeing with that research paper by Peyton-Jones, who is one of the principle creators of Haskell. He makes the point that IO should normally be at the top-level functions which call into a core of pure functions. And the point of the IO type is to delineate the impure, imperative portions of the code in a language that is otherwise always pure. So when these types are used, Haskell is not a 100% pure, declarative language. And that is a good thing. This is just Haskell's way of mixing pure and impure code.
The World state is not reversible in time, because it is impossible to cache the infinite states of the World for each discrete time (besides time is continuous which is another reason there are infinite states). Haskell does not allow the World in "IO a" to be reversed in time, i.e. all action functions occur in forward time order and thus the functions that input IO are imperative and no longer declarative. Thus, this restriction on order does make the action functions pure, because a pure function must be reversible, because irreversibly is a side-effect. For example, you write some output to the World, then you read some input which depends on the user having the output before making the input. Thus the output is a global side-effect in the World that impacts the input. Contrast this with functional reactive programming, which allows inputs and outputs to occur in any order, i.e. it is declarative and order-independent. Haskell can do pure functional reactive programming, and to interface a stateful world, every functional reactive program will need an impure, imperative "wrapper", which is what IO is.
See also my comments on this page, Pure Functional Language: Haskell
=========================
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4566
Okay, since you censored my last 2 posts, I will send you this message via your blog, since I can not seem to find your contact email address anywhere.
Rúnar wrote:
"Imperative programming is just Kleisli composition"
John Hughes apparently says you are wrong. I assume you know he helped create Haskell, and is very prominent in the field of FP:
http://www.cse.chalmers.se/~rjmh/afp-arrows.pdf
Section 1.3 Arrows as Computations, Programming with Arrows
Hughes wrote:
"monads..., help to structure purely functional code, with not an imperative operation in sight"
Note I am publishing all of the censored comments to the web, including those censored by Tony Morris over at his blog. And I am aware of your remark at Twitter:
http://twitter.com/#!/runarorama/status/84026046692851712
"Feeding a known troll: http://t.co/2ezUkL8 "
Enjoy the bliss.
Ah I see he uncensored my posts above.
Rúnar, thank you for uncensoring my 2 posts.
Let me try to close our discussion amicably.
Rúnar said: The noise-to-signal ratio in your comments exceeds my tolerance.
You added that. My verbose middle comments contained some errors (perhaps due to exhaustion after midnight at the end of an 18-hour work day, in an Asian internet cafe with multiple patrons simultaneously and loudly singing different songs for hours, while also fighting an amoeba with flagyl, and I have 1 eye). The last 4 comments were written after I got some limited sleep (woken by a noisy parade).
I just realized why we were talking past each other.
Imperative means non-declarative, i.e. explicit "control flow". The ambiguity is in the definition of "control flow". A pure function creates a state change by creating new state, and not by changing the input state. Thus, apparently some claim that pure FP is declarative, thus impurity is (a subset of) imperative.
I could accept that definition. Alternatively, I already implied early in our discussion that I could agree that all non-declarative programming is imperative.
Rúnar, are you saying that only Kleisli composition is imperative? Or, are we in agreement that monadic composition generalizes to normal pure function composition, and thus all pure function composition would be imperative, where you said "state transition" (i.e. state creation) is imperative?
Pure function composition is ordered and causes state creation ("transition") whether it be in a monad, Kleisli, or otherwise. Your apparent (?) distinction on monadic and Kleisli led me to believe that you were saying only some FP composition is imperative.
For novice readers, the Kleisli arrow is the function, a -> m b. The arrow "->" is the general function. A function is either impure or pure, thus either changes or creates state respectively. Some forms of declarative programming (e.g. HTML, CSS, etc.) are stateless, and (if I remember correctly) this is possible because they are not Turing complete.
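For those same novice readers, here is a small Scala sketch of Kleisli composition (my illustration, specialized to Option): both arrows are pure, so nothing about Kleisli composition is inherently impure.
- Code:
object KleisliDemo {
  // Two Kleisli arrows a => Option[b], both pure.
  def parse(s: String): Option[Int] =
    try Some(s.toInt) catch { case _: NumberFormatException => None }
  def recip(n: Int): Option[Double] =
    if (n == 0) None else Some(1.0 / n)

  // Their Kleisli composition: String => Option[Double]
  def parseThenRecip(s: String): Option[Double] = parse(s).flatMap(recip)

  val a = parseThenRecip("4")    // Some(0.25)
  val b = parseThenRecip("0")    // None
  val c = parseThenRecip("xy")   // None
}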
===========================
http://kawagner.blogspot.com/2007/02/why-monads-are-evil.html?showComment=1311410012629#c7474729809428588396
Shelby wrote:
I have refined my understanding, which is explained at the following link:
https://goldwetrust.forumotion.com/t112p180-computers#4434
As Peyton-Jones elucidates in "Tackling the Awkward Squad", IO Monad is simply (one of) Haskell's way(s) of mixing impure, imperative code with pure, declarative code. To the extent that you can make your interface to the world declarative, i.e. pure functional (reactive), then you don't use IO Monad. But at least until the world (i.e. the OS) becomes pure functional, then we need a "wrapper" on our pure functional core-- which is IO Monad.
I gained clarity in the summary of my current open source project, which goes into more detail on this issue:
http://Copute.com
Last edited by Shelby on Fri Dec 09, 2011 10:02 pm; edited 11 times in total
Software engineering is a living organism (why we need Copute!)
http://elegantcode.com/2011/06/22/why-software-development-will-never-be-engineering/
The nature of software development, [...], leads itself to rapid evolution.
Consider what direction software development is evolving. Is it even evolving in the direction of rigid engineering practices or is it evolving in the exact OPPOSITE direction?
Ten years ago, we tried to use UML diagrams and CASE tools to develop software. Ten years ago waterfall was all the rage. Ten years ago, we thought that ten years in the future we would have programs that would allow us to build software in the same way that CAD tools allow for building machine parts.
Not only did it not happen. It went completely the other way. Now we are all talking about Agile. Now people don’t even remember what CASE tools are. Now we are building software without trying to define the whole system in UML diagrams.
The fact of the matter is software systems are unruly beasts!
The analogy was that online poker skills accelerated in the past 5 years due to more interaction and human involvement, i.e. lowering the barriers to COOPeration (note I also own COOPute.com). This is exactly what I am trying to do for software evolution by creating Copute, with its economic and technical models.
I replied to a comment:
http://elegantcode.com/2011/06/22/why-software-development-will-never-be-engineering/#comment-234084180
The difference between software and the bridge builder is that you can calculate if a bridge is stable and can support the weight and withstand the wind... I never could prove a piece of software to be correct with mathematics. Just by using it and seeing if it didn't cause errors.
There are formal semantics for software in some languages, but your point is valid in the sense that the number of variables that have to fit in much software is greater, and more importantly they are changing more rapidly. So it is not that we lack equational foundations increasingly used in practical software engineering, at least by those on the leading edge (e.g. category theory, monads, programming paradigms, agile practices, agile-enforcing compilers, formal logic, the pure lambda calculus and referential transparency, behavioral static typing, etc.); it is that the dynamic complexity of variables in software is insatiable, because the better our software, the more variables of demand are created by the new possibilities revealed. Software is unique among the engineering disciplines in that it is applicable to all of them.
Software is the encoding of human knowledge, which is always progressing.
There are some important equational foundations that we need to popularize so that competitive cooperation on software can accelerate.
http://esr.ibiblio.org/?p=3345#comment-311334
There’s something that many commenters have alluded to, but didn’t come out directly and say:
An engineer will specialize and become expert in some field, like bridge-building. The days when an engineer like Brunel could design bridges and ships and… are long gone.
Software developers know how to develop software, but they come to a big project with no knowledge of the problem domain. They have to learn that on-the-fly, gaining experience by making mistakes. Still worse, the people who commissioned the project in the first place don’t know what they really want, and find problems communicating with the software people. This is much harder than telling an engineering firm that you need a bridge from here to there, to carry so much traffic, and they provide a solution.
OTOH, the above is not all bad. Software’s inherent flexibility allows the business to explore solutions that they might otherwise not be aware of (if they have the nerve to keep funding the project). Trying out a suspension bridge, then a cantilever, then a simple truss design by building them all wouldn’t fly in the engineering world; the engineer would be expected to do that on paper and propose the best one.
Exactly why I am creating Copute. The maximum division-of-labor and thus specialization is only possible if code is more reusable and smaller orthogonal modules, so that programmers can specialize and have their efforts reused in different problem domains.
P.S. Daniel's Biblical "travel & knowledge will increase" pops up again, in this online poker (increasing travel virtually) to knowledge progress analogy.
Working on a new Introduction to Copute (also technical rationale)
Copute is a new computer language which aims to correct the mistakes and shortcomings in other programming languages, which have limited the scale at which software can be built from reusable modules.
Fitness
Degrees-of-freedom (DOF) dictates how well a system can adapt to cooperate and fit to a desired solution. For example, there would be gaps (i.e. errors in fitness) between a bicycle chain and the curved shape it wraps around, because the chain can only bend at the links. Employing instead a solid, but flexible metal strip, the metal would remain fit to the curve only with a sustained force-- the resisting force being a reduced DOF and an error in fitness. Permanent bending reduces DOF for wrapping to different curves.
Google can't even find a wise proverb, "if you use your power, then you lose it". The equivalent form is probably popular because it is misunderstood, "absolute power corrupts absolutely". The utility of power exists only because there are resistance forces. This generative fact applies to everything. For example, the power vacuum that gives rise to central authority is due to the vested interests, bickering, selfishness, and complacency (laziness) of the governed.
The proof is given by the well established fundamental physics equations. Power is the rate at which work can be completed. Power requires energy to produce work, but the more work that is performed, the greater the potential energy available to undo the work. This type of potential energy is due to the resistance forces encountered during the work to produce a particular configuration of the subject matter.[1] For example, a compressed spring wants to push back and undo the work.
So if our goal is to get more configurations in our system with less work, then we need to reduce these resistance forces, i.e. increase the DOF. Think of springs opposing movement in numerous directions, and we want to remove them. Thus, if we succeed to increase DOF, it takes less work to produce our desired diversity of configurations, thus less power to produce them faster. And the configuration of the subject matter which results from our work thus decays more slowly (i.e. becomes unfit more slowly), because the resistance forces are smaller. Requiring less power (and work), to produce more of what we want and faster, with a greater longevity, is thus more powerful (efficient) and explains the wisdom of the forgotten proverb.
Knowledge
Communication redundance (amplitude) is a form of power, because its utility exists due to the resistance to comprehension, i.e. due to noise mixed with the signal. The signal-to-noise ratio (SNR) depends on the DOF of both the sender and the receiver.
The difference between signal and noise, is the mutual comprehension (i.e. resonance) between the sender and the receiver, i.e. noise can become a signal or vice versa, depending on the coupling. In physics, resonance is the lack of resistance to the change in a particular configuration of the subject matter, i.e. each resonator is a DOF.[2] DOF is the number of potential orthogonal (independent) configurations, i.e. the ability to obtain a configuration without impacting the ability to obtain another configuration. In short, DOF are the configurations that don't have dependencies on each other.
Thus increasing the number of independent configurations in any system, makes the system more powerful, requiring less energy, work, and power, to obtain diversity within the system. The second law of thermodynamics says that the universe is trending to maximum entropy, i.e. the maximum independent configurations. Entropy (disorder) is a measure of the relative number of independent possibilities. So now we see why Coase's theorem holds, that the universe is trending towards the maximum free market, and any cost barrier will fail eventually. This is why small things grow faster, because large things reduce the number of independent configurations and thus require exponentially more power to grow. Thus in the point-of-view of change and the future, small things are exponentially more powerful than large things. The bell curve exists because the minority is exponentially more efficient (more DOF), and because a perfectly equal distribution would require infinite DOF and the end of the universe's trend to maximum disorder. Large things stagnate, rot, collapse, and disappear.
Knowledge is correlated to the DOF, because in every definition of knowledge one can think of, an increase in knowledge is an increase in DOF and vice versa. Software is unique among the engineering disciplines in that it is applicable to all of them. Software is the process of increasing knowledge. Thus one of the most critical characteristics of software is that it does not want to be static, and that the larger the components, thus the fewer the DOF, the less powerful (efficient) the software process.
Copute attempts to apply these concepts to software, by increasing the independence (orthogonality) of software components, so they can be composed into diverse configurations with less work. Independent programmers can leverage independent components with less communication and refactoring work, thus Copute is about increasing cooperation.
Below we will outline the major features of Copute which enable independent software components.
Pure Functional Programming
Pure functional programming (PFP) is fundamentally about not repeating oneself, i.e. the ability to compose (small) functions more freely, because it has the following features (a brief sketch follows the list):
- Pure: for each value of the input, a function call can be replaced by a previously cached value of the result, i.e. the function doesn't depend on any external state.
- Higher-order: a function can be a value.
- Curried: a new function can be created by assigning a constant value to an input of an existing function.
- Declarative: function composition expresses logical order-of-operations, but due to the replacement principle for purity, the order-of-execution is independent.
- Mathematical: lambda calculus is PFP, which allows us to use category theory to maximize DOF.
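Here is the brief Scala sketch referred to above (my illustration; the names are arbitrary), showing the pure, higher-order, curried, and declarative features together:
- Code:
object PfpFeatures {
  // Pure: the result depends only on the inputs.
  def area(w: Double, h: Double): Double = w * h

  // Higher-order: a function is a value that can be passed and returned.
  def twice(f: Double => Double): Double => Double = x => f(f(x))

  // Curried: fixing one input of an existing function yields a new function.
  val scale: Double => Double => Double = k => x => k * x
  val double: Double => Double = scale(2.0)

  // Declarative: the order in which the sub-expressions are evaluated cannot change the result.
  val result = twice(double)(3.0) + area(2.0, 5.0)   // 12.0 + 10.0 == 22.0
}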
PFP is never imperative, and is always declarative. In general, declarative languages (e.g. HTML, CSS) express the logic of the system without forcing a particular order-of-execution. Declarative structure is one of the features required for concurrency. Due to purity, the state transition for each function in PFP is independent of (orthogonal to) all the others and of the order-of-execution, and thus does not inhibit DOF as imperative programming does. Some people mistakenly state that PFP is imperative, but these are cases where a particular language compiler fudges (e.g. the IO monad in Haskell[3]) or does not check the purity of a construct, to give the appearance of purity within the semantics of the type system of the language, while being imperative in the semantics of state external to that type system.
PFP is mutually inclusive to OOP, not mutually exclusive. PFP and OOP are duals in that they each can be used to solve the Expression Problem for maximum DOF.[4] And category theory reveals that PFP and OOP are composable as follows.
Although the DOF benefits derive from math, the composition benefits of PFP are intuitive, because every programmer understands how to do composition of functions. The (category theory models of) higher-level object subtypes (OOP) can be composed with ordinary functions-- functions which input type T and return type A-- without having to write specialized versions of these functions for each possible subtype we create. The following OOP models simply compose as if they were (and thus resonate with) ordinary functions. Read the Copute documentation for each of the following models to aid in understanding the following table.
Object Oriented Programming
Copute builds on several critical Scala innovations, such as higher-kinded generics, generic type parameter variance annotations, multiple inheritance of interface and implementation, and pattern matching with algebraic data types (case classes). Copute adds a few critical refinements to the single-point-of-truth (SPOT), which drastically improve the DOF, conciseness, and intuitive understanding.
- Purity
- Unreferencable implementation
- No virtual re-implementation
- Static methods instead of implicits
- Disobscurity: e.g. one way to write a function instead of 6, no methods as operators, no.
- No exceptions
TODO: discuss single-point-of-truth. Also discuss application of granular purity to OOP.
[1] http://en.wikipedia.org/w/index.php?title=Energy&oldid=435292864
Stored energy is created whenever a particle has been moved through a field it interacts with (requiring a force to do so), but the energy to accomplish this is stored as a new position of the particles in the field—a configuration that must be "held" or fixed by a different type of force (otherwise, the new configuration would resolve itself by the field pushing or pulling the particle back toward its previous position). This type of energy "stored" by force-fields and particles that have been forced into a new physical configuration in the field by doing work on them by another system, is referred to as potential energy. A simple example of potential energy is the work needed to lift an object in a gravity field, up to a support.
[2] http://en.wikipedia.org/w/index.php?title=Resonance&oldid=432632299#Resonators
A physical system can have as many resonant frequencies as it has degrees of freedom.
http://en.wikipedia.org/w/index.php?title=Resonance&oldid=432632299#Mechanical_and_acoustic_resonance
Mechanical resonance is the tendency of a mechanical system to absorb more energy [i.e. less resistance] when the frequency of its oscillations [i.e. change in configuration] matches the system's natural frequency of vibration [i.e. natural change in configuration] than it does at other frequencies.
[3] https://goldwetrust.forumotion.com/t112p180-computers#4434
[4] http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4535
http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/#comment-4537
Last edited by Shelby on Mon Jul 25, 2011 10:07 am; edited 4 times in total
Efficiency of purely functional programming
http://twitter.com/#!/kmett/status/74517454973444096
Going immutable in a strict language can cost you a logarithmic factor, but you get free persistence http://bit.ly/6T6lfr
http://stackoverflow.com/questions/1990464/efficiency-of-purely-functional-programming
Array access explains the logarithmic cost factor:
http://stackoverflow.com/questions/1255018/n-queens-in-haskell-without-list-traversal/1268285#1268285
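A sketch of where that logarithmic factor comes from (my own Scala illustration, assuming a balanced tree over the element indices): an immutable update copies only the O(log n) path to the element and shares every untouched subtree, whereas a mutable array updates in place in O(1).
- Code:
object PersistentUpdate {
  // Persistent balanced tree indexed by position; n is the number of leaves below a node.
  sealed trait Tree
  case class Leaf(v: Int) extends Tree
  case class Node(left: Tree, right: Tree, n: Int) extends Tree

  // Update copies only the spine from the root down to leaf i (O(log n) new nodes);
  // both the old and new versions remain valid afterwards ("free persistence").
  def update(t: Tree, i: Int, v: Int): Tree = t match {
    case Leaf(_) => Leaf(v)
    case Node(l, r, n) =>
      val half = n / 2
      if (i < half) Node(update(l, i, v), r, n)
      else          Node(l, update(r, i - half, v), n)
  }
}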
I'm becoming skeptical about the claim that pure functional is generally log n slower:
https://goldwetrust.forumotion.com/t112p195-computers#4537
Discussion of tradeoffs for lazy programming:
https://goldwetrust.forumotion.com/t112p165-computers#4430
Last edited by Shelby on Thu Aug 25, 2011 5:58 pm; edited 3 times in total
BitCoin is socialist and not (and no online currency can be) anonymous
http://www.bitcoin.org/
The fundamental problem is that any online currency either requires a centralized trust authority which is subject to govt regulation, or a majority vote of decentralized authorities which is just socialism and the antithesis of what makes gold and silver money (he who holds the gold makes the rules, no third party authority or majority vote). I had come to this conclusion a long time ago in the Precious Metals forum during my analysis of whether an online currency (potentially backed by gold and silver) could survive govt retribution.
BitCoin can be defeated by a hacker who puts a virus on a majority of computers using BitCoin, or by govt legislation that requires every computer sold to run a firmware program that defeats it (even if a minority will remove the firmware illicitly):
http://forum.bitcoin.org/index.php?topic=12435.msg172811#msg172811
Thus the BitCoin algorithm has an attack vector, which encourages govt regulation of our computers.
No anonymity on the internet:
http://en.wikipedia.org/wiki/Bitcoin#Anonymity
============================
I should also be frank with my friends.
...at FinancialSense.com. The topic of Bitcoin helped me find direction for it.
Spoken like a good collectivist:
Government debt should be settled, and outlawed going forward. A balanced budget amendment to the Constitution should be adopted.
I suppose you didn't realize that you were also outlawing the 2nd law of thermodynamics too. But I am not going to explain that one to you, because you will NEVER get it.
Btw, Bitcoin is vulnerable to attack by a government law which says every computer must contain a chip which the government can use to hijack the computer for purposes of regulating peer-to-peer protocols. This is because for money to gain enough supply to be viable, the protocol has to be fairly well known and static.
Last edited by Shelby on Tue Sep 20, 2011 11:15 pm; edited 2 times in total
Your neighborly Google, whose slogan is "Do no evil"
Your neighborly Google, whose slogan is "Do no evil":
http://www.stuff.co.nz/technology/digital-living/5217945/Google-Wi-Fi-snooping-lawsuits-can-proceed
Computers create heat and no potential energy
Note this applies to the hardware perspective, while in the software dimension, there is potential energy created (which is why software decays over time relative to the changing applications).
http://www.fibertown.com/pdf/uptimeinstituteheatdensitytrends.pdf
Power consumed by computer and communication products is totally converted to heat.
So this means that the work being done is not creating potential energy (sustained resistance forces), but rather permanently bending the structures of resistance, thus releasing heat (heat is disordered energy, which is not useful, i.e. not "signal"). See the quotes of what I wrote previously about DOF. Remember that "work" is noise, and the information transmitted (resonated between sender & receiver) is the "signal". We want to eliminate "work" and increase information.
In a computer, these bending resistance forces are stored due to capacitance, i.e. the ability to store a charge of electrons with an insulator, and the fact that a transistor is a variable resistor (insulator) controlled by a charge applied to it. These resistance forces are reduced by making the transistors smaller, which is what causes Moore's Law (we can double the # of transistors on a silicon die every 18 months).
My previous treatise on physics and information:
https://goldwetrust.forumotion.com/t112p180-computers#4436
Shelby wrote:[...]
Employing instead a solid, but flexible metal strip, the metal would remain fit to the curve only with a sustained force-- the resisting force being a reduced DOF and an error in fitness. Permanent bending reduces DOF for wrapping to different curve
[...]
The proof is given by well-established fundamental physics equations. Power is the rate at which work can be completed. Power requires energy to produce work, but the more work that is performed, the greater the potential energy available to undo the work. This type of potential energy is due to the resistance forces encountered during the work to produce a particular configuration of the subject matter.[1] For example, a compressed spring wants to push back and undo the work.
So if our goal is to get more configurations in our system with less work, then we need to reduce these resistance forces, i.e. increase the DOF. Think of springs opposing movement in numerous directions, and we want to remove them. Thus, if we succeed in increasing DOF, it takes less work to produce our desired diversity of configurations, thus less power to produce them faster. And the configuration of the subject matter which results from our work thus decays more slowly (i.e. becomes unfit more slowly), because the resistance forces are smaller. Requiring less power (and work), to produce more of what we want and faster, with a greater longevity, is thus more powerful (efficient) [...]
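To make the compressed-spring contrast concrete, here is a toy calculation (assumed numbers, not part of the quoted treatise): elastic work is stored and can undo itself, while plastic bending leaves nothing behind but heat.

```python
# Toy illustration with assumed values.
k = 200.0    # spring constant, N/m (assumed)
x = 0.05     # compression, m (assumed)

work_in = 0.5 * k * x**2       # work done compressing the spring, ~0.25 J

# Elastic case: the work is stored as potential energy and can push back
# (undo the work). Plastic case: the same work permanently bends the
# material, so essentially all of it leaves as heat instead.
elastic_recoverable = work_in   # ~0.25 J available to undo the work
plastic_recoverable = 0.0       # bent metal stays bent; the energy became heat
print(work_in, elastic_recoverable, plastic_recoverable)
```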
Versioning, forking, bug fixes, etc... in Copute.com
THIS IS NO LONGER THE MODEL.
Please note that this is a thought experiment and it may not be implemented for Copute. There is ongoing thinking on how to obtain the best free market pricing model. Preferably it should be entirely decentralized.
Some private writings solved most of the issues in the Copute.com economic model. The remaining issue is modifications to existing modules.
Summary of Economic Model
First, a brief summary of the prior two posts. Copute will be a new open source (free) computer language that fosters reusability of modules of programming code (to a finer granularity of module size than other languages). Anyone can use Copute for free, even modify it, and there is no requirement to use the Copute.com marketplace when using the Copute language. In addition, Copute.com will offer a non-exclusive marketplace (the same modules can be offered elsewhere, even for free) for programmers to offer their modules to other developers, so that duplication of programming effort (modules) can be reduced. Any derivative software product created from these Copute.com modules will pay the greater of 1% of gross sales or 10% of the salaries[1] paid to develop that derivative software product. Copute.com will distribute these royalties to module owners (after deducting Copute.com's fee, perhaps 1% of the royalties) proportionally, based on automated, statistically sampled profiling of the derivative software in use to determine the percent of time the CPU spends in each module.
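As a rough illustration of that split (not an actual Copute.com API; the function names, module names, and dollar amounts are invented), the arithmetic might look like:

```python
# A minimal sketch of the proposed royalty model, assuming the rates above.
# All names here are hypothetical placeholders.

def royalty_due(gross_sales, dev_salaries):
    """Derivative work owes the greater of 1% of gross sales or
    10% of the salaries paid to develop it."""
    return max(0.01 * gross_sales, 0.10 * dev_salaries)

def distribute(royalty, cpu_time_fraction, copute_fee=0.01):
    """Split the royalty among module owners in proportion to the sampled
    fraction of CPU time spent in each module, after Copute.com's fee."""
    pool = royalty * (1.0 - copute_fee)
    total = sum(cpu_time_fraction.values())
    return {module: pool * t / total for module, t in cpu_time_fraction.items()}

# Example: $2,000,000 gross sales, $500,000 of development salaries.
royalty = royalty_due(2_000_000, 500_000)        # max(20000, 50000) = 50000
payouts = distribute(royalty, {"parser": 0.6, "ui_kit": 0.3, "json_io": 0.1})
print(royalty, payouts)   # ~29700 / ~14850 / ~4950 after the 1% fee
```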
Copute.com won't have a monopoly, because any other site can offer a similar marketplace for Copute modules (or even modules of any computer language, although the Copute language will be best for reusing modules). This marketplace is to make modules fungible, and the importance of this is discussed here. Note that even though modules might be offered for free elsewhere, if the derivative software uses even one module from Copute.com that is not free elsewhere, then it will pay the same royalty to Copute.com. Thus the freedom of non-exclusivity does not render Copute.com's business model impotent. In fact, it adds inertia, and of course some module owners will not want to offer their modules for free elsewhere (they might offer them elsewhere not for free, which again would not hurt Copute.com's business model, but rather add inertia to it).
Module Improvement Ownership
But who owns the module when someone other than the original owner improves it (bug fix, refinement, etc.)? I can't think of a direct metric of value for such improvements. The best idea I have so far is to use a very flexible versioning scheme, award minimally 1% of the "remaining value" for each fix, and give the existing owners of a module the option to increase that minimum award. Let me explain.
A module is created and offered by its owner(s) on Copute.com. This is open source, meaning anyone could offer an improvement. So how do we assign the ownership percentage of the improved version of the module? On the first improvement, the original owner has a 100% share of ownership, so we minimally and automatically award 1% of that to the submitter of the improved version. The original owner has the option of logging in to his account on Copute.com and increasing that minimum award (only the existing owners will have the knowledge to best evaluate an improvement and decide its value). So then there are two owners, and minimally the split is 99% and 1%. Then a second improvement is made, so minimally 1% of the remaining 99% is automatically awarded, which is 0.99%. This 0.99% is deducted proportionally from the existing owners, so the first owner has 98.02%, the second has 0.99%, and the third has 0.99%. Either of the two prior owners can optionally award more, giving up some of their percentage. Eventually, for example, it may reach the point where the original owner has say 50%, and each improvement owner has (minimally) roughly 0.5%. The point is that future improvements slow down the erosion of prior ownership, unless the prior owners choose to award a higher value. This is desirable because of the law of diminishing returns (80/20 rule).
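To check that the percentages above follow from the 1%-of-remaining-value rule, here is a small sketch (the helper name is hypothetical, and owner-granted awards above the minimum are not modeled):

```python
# Minimum-award dilution sketch, assuming the 1%-of-remaining-value rule above.

def apply_improvement(shares, remaining, minimum=0.01):
    """Award `minimum` of the remaining value to a new contributor,
    deducting it proportionally from all existing owners."""
    award = minimum * remaining
    shares = [s * (1.0 - award) for s in shares]   # proportional deduction
    shares.append(award)                           # new contributor's share
    return shares, remaining - award

shares, remaining = [1.0], 1.0          # original owner holds 100%
for _ in range(2):                      # two successive improvements
    shares, remaining = apply_improvement(shares, remaining)

print([round(100 * s, 2) for s in shares])   # [98.02, 0.99, 0.99]
```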
Note that projects (e.g. Linux, Firefox, etc.) tend to have 1000s of bug fixes and improvements, thus the original creators would be diluted to near 0% if this model were applied at project-scale granularity; but within a project, the modules typically require only dozens of fixes/improvements, so this model encourages original owners to create smaller (more granular, thus more) reusable modules. So we see that small, reusable units scale economically well, whereas project-level granularity does not (it cannot be fungible).
Also note that the above concept is economic ownership, i.e. the percentage of royalties, not control. If a module is released on the Copute.com marketplace, control is relinquished to the terms of the license, which allows unlimited modifications and forks but requires them to reside on the Copute.com marketplace; not even Copute.com has any centralized decision control. The original owner of the module might release the original module elsewhere with different licensing terms (the Copute.com marketplace is non-exclusive), but that owner can not release the improvements elsewhere, and neither can the submitter of an improvement release the improvement elsewhere (unless making the improvement to an identical fork of the same module in a different marketplace). So we see that Copute.com's forks potentially become unique from the forks in other marketplaces, unless all submitters of improvements submit to all marketplaces.
Versioning and Forking
The Copute language, and thus the complementary (and orthogonal) marketplace on Copute.com, is all about freedom-of-choice, so we don't want to limit it in any way. Submitters of improvements should be free to omit selected prior improvements, i.e. to create a new fork in the version trail of the module. I have an idea for a version numbering scheme that accommodates this with explicit enumeration of omissions. A version number is a sequence of numbers separated by periods, e.g. "1.0.1", where each period identifies a fork. The first version of a module is simply "1". If an improvement is made to that module, the version becomes "2", the next "3", and so on. For example:
* An improvement that omits improvement "2" and keeps "3" is version "1:3.1".
* An improvement that omits improvement "3" forward is version "2.1".
* An improvement that omits improvement "2" and keeps "3" through "5" is version "1:3-5.1".
* An improvement that omits improvement "3" and keeps "4" through "5" is version "2:4-5.1", which is equivalent shorthand for "1-2:4-5.1".
The ".1" begins a new fork, which can itself be forked, so version numbers can end up with multiple periods.
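Here is a rough sketch of how the portion of a version number before the first period might be interpreted under the shorthand just described (the helper name is hypothetical, and the "..n" alternative-fork disambiguator discussed below is ignored for brevity):

```python
# Sketch only: expand the pre-period part of a module version into the
# set of trunk improvements it keeps. A bare leading number n is treated
# as shorthand for the range 1-n, per the examples above.

def kept_improvements(version):
    base = version.split(".", 1)[0]          # e.g. "2:4-5" from "2:4-5.1"
    kept = set()
    for part in base.split(":"):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
        else:
            lo = hi = int(part)
        kept.update(range(lo, hi + 1))
    first = base.split(":")[0]
    if "-" not in first:                     # leading "n" means "1-n"
        kept.update(range(1, int(first) + 1))
    return kept

assert kept_improvements("2:4-5.1") == {1, 2, 4, 5}   # omits 3
assert kept_improvements("1:3-5.1") == {1, 3, 4, 5}   # omits 2
assert kept_improvements("2.1") == {1, 2}             # omits 3 onward
```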
Note that when a submitter proposes to create an improvement without forking, it will initially be created as a fork; it must then be accepted by one of the owners of the trunk in order to become numbered as a trunk improvement instead of a fork. This is necessary so that owners don't have to create forks just to indicate they don't approve of an improvement.
Note there is no automated way to precisely verify that a particular improvement actually includes and omits the improvements it says it does. Thus it is allowed to have two forks of the same fork, and this is done using "..n.1", e.g. "1:3..1.1" and "1:3..2.1" would be alternative forks of "1:3.1".
Note this proposed module version numbering scheme bears almost no semantic relationship to existing project version numbering schemes that also employ periods.
So which version will people use if there are multiple forks (which tip of which branch of the metaphorical tree)? They will use whichever fork they think best fits their needs. The popularity of forks in derivative software will be a vote on their fitness, i.e. the free market will anneal the answer. The module owner(s) may endorse a particular fork by making improvements in it. This is superior to the current open source model, where forking is highly discouraged and the project owners (current languages don't support fine module granularity of reuse) are (hopefully "benevolent") dictators "for life" (Linus Torvalds and others have been referred to with the quoted title).
[1] Salaries are those of the company developing the derivative software product, not any salaries that were paid to develop the modules that are offered on Copute.com.
Last edited by Shelby on Wed Mar 27, 2013 12:06 am; edited 2 times in total