Copute does not repeat the "Billion Dollar Mistake"
Copute has an orthogonal Maybe type, which cannot be unboxed automatically unless it has been checked.
Other languages default types to nullable, which can cause an unchecked exception on every use of every type.
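For comparison, here is a minimal Scala sketch (my own illustration, not Copute syntax) of the checked-unboxing idea:
- Code:
// Absence is an explicit Option value; the compiler forces a check
// before the value can be unboxed, so no unchecked null exception.
def findUser(id: Int): Option[String] =
  if (id == 1) Some("Shelby") else None

val name: String = findUser(2) match {
  case Some(n) => n           // unboxed only on the checked branch
  case None    => "anonymous" // absence must be handled explicitly
}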
Was Knuth wrong about coroutines?
Tangentially, Donald Knuth is like an idol to many in computer science. He seems to be a very humble, likable, productive, and super intelligent person.
Regarding his statement about coroutines, "Subroutines are special cases of ... coroutines," he also wrote the following:
Coroutines are analogous to subroutines, but they are symmetrical with respect to caller and callee: When coroutine A invokes coroutine B, the action of A is temporarily suspended and the action of B resumes where B had most recently left off.
Afaics, does he have it backwards, or is the statement arbitrary? Afaics, coroutines are special cases of subroutines.
Think about it. A subroutine which is referentially transparent will return the same value for the same inputs.
Thus all a coroutine is doing is restructuring the algorithm such that each "yield" is a return value from a subroutine. I had made this observation earlier:
http://code.google.com/p/copute/issues/detail?id=24
Let me give an example with coroutines:
- Code:
function TwoDimensionWalk( m, n )
{
    for( i = 0; i <= m; ++i )
        for( j = 0; j <= n; ++j )
        {
            // do any thing here
            yield to Consume
        }
}
function Consume()
{
    // do any thing here
    yield to TwoDimensionWalk
}
Here it is with subroutines:
- Code:
function TwoDimensionWalk( m, n )
{
    for( i = 0; i <= m; ++i )
        for( j = 0; j <= n; ++j )
        {
            NextJ()
            Consume()
        }
}
function NextJ()
{
    // do any thing here
}
function Consume()
{
    // do any thing here
}
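For a runnable analogue, here is a minimal Scala sketch (my own illustration) of the same restructuring, where each yield becomes a value returned from a subroutine (an iterator element) and the driver loop plays the role of Consume:
- Code:
// The coroutine's yields become elements of a lazily produced sequence.
def twoDimensionWalk(m: Int, n: Int): Iterator[(Int, Int)] =
  for {
    i <- Iterator.range(0, m + 1)
    j <- Iterator.range(0, n + 1)
  } yield (i, j)

def consume(step: (Int, Int)): Unit = println(step)

// The driver pulls each value; no coroutine control transfer is needed.
twoDimensionWalk(2, 3).foreach(consume)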
Scala was 22,000 to 48,000 LOC to implement
http://lambda-the-ultimate.org/node/1233#comment-13870
A typical programmer will average around 30 delivered, production-ready LOC per day.
Considering that I might be above average, and considering I might work 50% longer per day, I am looking at on the order of 7 to 26 months to complete Copute's compiler.
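For concreteness, a rough calculation behind that range (my own arithmetic, assuming a 2x to 3x productivity multiplier on the 30 LOC/day average, a 50% longer workday, and about 22 working days per month): 22,000 LOC / (30 × 3 × 1.5 = 135 LOC/day) ≈ 163 days ≈ 7 months, while 48,000 LOC / (30 × 2 × 1.5 = 90 LOC/day) ≈ 533 days ≈ 24 months.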
Tax strategy
http://esr.ibiblio.org/?p=2931#comment-296156
>I don’t think a “forgone license fee” counts as a loss under any accounting system.
Yeah, there’s no way the IRS would buy that. MS has been able to get massive write-offs by giving away copies of their products to schools and charities, but I don’t think NOK counts as a charity yet.
Lower middleman cost "App Store" is needed?
I think my business model for Copute might be needed by the market from the perspective of both the developers and the users (customers).
http://esr.ibiblio.org/?p=2931#comment-296323
Jacob Hallén Says:
February 12th, 2011 at 11:12 am
With Symbian the customer is owned by the operator. For an app to work (without tons of tedious warnings), it needs to have a certificate signed by the operator. In exceptional circumstances you can get your certificate signed by the phone manufacturer, but most of the time this doesn’t happen, because the phone manufacturer wants to stay buddies with the operator and get him to subsidise the manufacturer's phones.
As an app developer you are stuck in certification hell. It is tedious and damned expensive.
Enter the iPhone. The customer is owned by Apple. They will lease the customer to you on fairly decent terms. You no longer need to deal with hundreds of operators or manufacturers that see you as a threat to their business. This is a large part of the success of Apple in the market. It beats the pants off the Symbian model, because users can get the apps they want with any operator they care to use and the developers have one target to develop for. However, Apple charges a high price for the convenience.
Google sees this and is also worried about being kept out of the walled garden with their ads. They put Android on the market, making it free and making the apps self certified. This casts the users free from the control of both operators and handset manufacturers. It makes the developers happy, because they are no longer under the control of Apple.
http://esr.ibiblio.org/?p=2931#comment-296332
Some Guy Says:
February 12th, 2011 at 12:27 pm
> Apple charges a high price for the convenience.
I wouldn’t call it a high price at all. 30% is quite typical for what we had to pay to distributors back in the days of software in boxes on retail shelves, and we don’t have to deal with the cost of implementing a payment system, etc.
http://esr.ibiblio.org/?p=2931#comment-296341
Morgan Greywolf Says:
February 12th, 2011 at 2:01 pm
> I wouldn’t call it a high price at all. 30% is quite typical for what we had to pay to distributors back in the days of software in boxes on retail shelves, and we don’t have to deal with the cost of implementing a payment system, etc.
But we no longer live in a world of boxed software. These days most of us who do buy software do so online. That means the only middle men that are of any consequence are the credit card processors and the transaction clearinghouses. Why do we need Apple? Oh, I forgot, because they make us.
http://esr.ibiblio.org/?p=2931#comment-296345
tmoney Says:
February 12th, 2011 at 4:05 pm
>Why do we need Apple? Oh, I forgot, because they make us.
And the examples of independent Android developers making their millions without using Android market place are…
http://esr.ibiblio.org/?p=2931#comment-296365
@Some Guy:
Certainly, there are examples of people getting rich off Apple’s app store, just like there are people getting rich from football, from singing, etc.
A lot of people make the assumption that paid apps are the way to go, because it’s easier to make good money with them than via advertising. That makes sense if the average developer can, in fact, make good money on the average app through Apple’s store.
I haven’t checked out this guy’s assumptions or math or numbers, but he goes into great detail, and concludes that the median paid app in Apple’s store earns $682 for its developer.
Wow! So that means a granular contribution monetization model might be a much more realistic way for programmers to get involved! Yeah!!!!
Decided to code the Copute compiler in Scala instead of HaXe
Scala is a superset of Copute's planned feature set, except for pure functions:
http://copute.com/dev/docs/Copute/ref/intro.html#Scala
Which means it will drastically simplify the first iteration of the Copute compiler, because I can just do a literal AST translation of the grammar and output Scala code, with no post-parsing analysis needed. Meaning I don't have to do any semantic analysis, just a surjective mapping of the AST, sort of akin to a regex search-and-replace. This should mean I can have something working within about 3 months, assuming I don't take any time off.
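To illustrate what that literal mapping might look like, here is a minimal Scala sketch (the AST node names are my own hypothetical illustration, not Copute's actual grammar):
- Code:
// Each node of the parsed AST maps one-to-one onto Scala source text,
// with no semantic analysis pass in between.
sealed trait Ast
case class Func(name: String, params: List[String], body: Ast) extends Ast
case class Call(target: String, args: List[Ast]) extends Ast
case class Ident(name: String) extends Ast

def emit(node: Ast): String = node match {
  case Func(name, params, body) =>
    "def " + name + "(" + params.mkString(", ") + ") = " + emit(body)
  case Call(target, args) =>
    target + "(" + args.map(emit).mkString(", ") + ")"
  case Ident(name) => name
}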
Whereas, the HaXe compiler can't do numerous things, so I would have to do considerable semantic analysis to output HaXe code:
http://copute.com/dev/docs/Copute/ref/intro.html#HaXe
Also if I write the initial compiler code in Scala, I will have all the planned features of Copute (except pure functions) at my disposal, and then translating that code back to Copute code later will also be a simple regex search-and-replace. Coding in HaXe, although it benefits from a more familiar C-like syntax, will be burdened by the critical features missing in HaXe.
Also HaXe has no debugger, and Scala has at least three integrated Java development environments with debuggers:
http://www.scala-lang.org/node/91#ide_plugins
Not having a debugger makes working in HaXe impractical.
Also if Copute outputs Scala code as one of its output targets, then it means Copute also has a debugger. Although it won't be debugging in native Copute, at least it will be in the surjective Scala code.
And Java is the main language of Google Android (and smartphones just passed PCs in global unit sales in Q4 2010!), and Scala runs on any platform that runs Java. So then Copute would too! Imagine Copute could become the most popular way to code Android applications.
So then what is the point of creating Copute, if Scala has everything except pure functions?
Several reasons:
- Copute has more optimized type erasure.
- Pure functions (from Haskell) are the critical feature needed for widescale contribution composability & reuse, as well as concurrency for more CPU cores coming.
- Most programmers cannot wrap their minds around Scala (and the better ones need months to grasp it). I was able to grasp Scala in a few days, but I am apparently far above average and have a lot of knowledge of Haskell, HaXe, etc. So Copute's syntax will be more familiar (C-like) and it will remove some of the unnecessary generality that makes Scala obtuse, yet retain all the necessary power of expression. So as one market (of several I foresee), I am viewing Copute as a faster way to write Scala code!
- Scala has the ability to throw and catch an exception. I am tentatively trying to eliminate even throwing an exception in Copute, except in a dynamic function.
- Dynamic coding is extremely popular, and it will require massive boilerplate in Scala. Dynamic coding in Copute will be the same (thus concise) as in the most popular language in the world, JavaScript.
- Scala can mix implementation in an interface (called "trait" in Scala), which causes semantic errors. I have disallowed this in Copute's inheritance model.
- Scala allows structural subtyping, which I have disallowed in Copute, because it can cause semantic errors.
- Scala allows "override" runtime virtual inheritance, which can cause semantic errors.
- Scala allows abstract and higher-kinded types (as well as other things such as hiding closures in Unit type arguments, etc.), which make reading Scala code like trying to read a treasure map. Copute code is more transparent and straightforward, without losing necessary capabilities.
- The intent is for Copute to target more languages, e.g. JavaScript, PHP, Python, etc.. Copute will be able to do this more easily because it doesn't have those unnecessary, academic super generality types that Scala has. Scala appears to be tied to the Java (and maybe Microsoft .Net) platforms for the time being.
- Copute has an EBNF defined LL(3) context-free grammar (CFG) already completed, so easier for others to write Copute tools. I am not sure if Scala has a BNF CFG specification (I need to research this).
- K.I.S.S.
- If Copute's compiler code will be simpler than Scala's (which it should be, since less unnecessary generality), then I should be able to innovate faster than Scala with more contributors, as they will be able to more readily understand the code.
- there are probably more reasons....
Type erasure versus reified generics
Here follows the Wikipedia comparison for Java's type erasure versus Microsoft C#'s reification, which I have expanded to include Scala's improvements over Java, and Copute's proposed method to obtain full reification via various planned output targets. The rows from the Wikipedia table which seemed unimportant, appear as empty rows below.
Btw, I find this summary of differences between Java and Scala, very useful.
So far, it seems to me that type erasure is more efficient than reification, because it does not require unnecessary multiple runtime versions of generic code, and that the problems with type erasure arise because the JVM (Java Virtual Machine) has poorly understood "use-site" variance declaration (versus Copute and Scala's "definition-site" variance declaration), and the JVM erases the order and number of type parameters and the bounds of the type parameters, so there is much unnecessary casting (i.e. reflection) and other problems.
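A minimal Scala sketch of the erasure behavior in question (my own illustration):
- Code:
object ErasureDemo extends App {
  val strings = List("a", "b")
  val ints    = List(1, 2)
  // Under type erasure both lists share one runtime class;
  // the type arguments are gone at runtime.
  println(strings.getClass == ints.getClass) // true
  // So an isInstanceOf check on a parametrized type is unchecked
  // (the compiler warns the type argument is eliminated by erasure).
  println(strings.isInstanceOf[List[Int]])   // true
}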
Java | Microsoft C# | Scala | Copute |
Type checks and downcasts are (implicitly) injected into client code (the code referencing the generics). Compared to non-generic code with manual casts, these casts will be the same[11]. But compared to compile-time verified code which would not need runtime casts and checks, these operations represent a performance overhead. Note such casts are not injected when the type parameter has a bound (the type that is erased to) which is the expected type. | C#/.NET generics guarantees type-safety and is verified at compile time and no extra checks/casts are necessary at runtime. Hence, generic code will run faster than non-generic code (and type-erased code) which require casts when handling non-generic or type-erased objects. | Scala implicitly injects these casts whether it runs on a virtual machine that is generics-ignorant (i.e. requires type erasure), e.g. the Java Virtual Machine (JVM), or on a virtual machine which has reification, e.g. Microsoft .Net. | Copute is a chameleon and these casts will only be implicitly injected when type erasure is required by the output target. And note that dynamically typed language output targets such as JavaScript and Python do not need casts (although they suffer speed because they do a hashtable lookup for each object member access). |
Cannot use primitive types as type parameters; instead the developer must use the wrapper type corresponding to the primitive type. This incurs extra performance overhead by requiring boxing and unboxing conversions, as well as memory and garbage collection pressure, because the wrappers will be heap allocated as opposed to stack allocated. | Primitive and value types are allowed as type parameters in generic realizations. At runtime code will be synthesized and compiled for each unique combination of type parameters upon first use. Generics which are realized with a primitive/value type do not require boxing/unboxing conversions. | Even though all primitive types are classes in Scala, these are implemented as primitive types where possible in the JVM["Step 8"]. Thus, due to type erasure Scala suffers the same boxing performance penalty as Java when a primitive type is used as a type parameter. However, there exist libraries and experimental compiler features to help with optimization, but they introduce boilerplate tsuris. | Copute is a chameleon and will achieve at least the performance of its output target. Additionally, Copute may apply implicit optimizations (that require no boilerplate) to improve performance on output targets that otherwise employ boxing. |
Generic exceptions are not allowed[12] and a type parameter cannot be used in a catch clause[13]. | Can both define generic exceptions and use those in catch clauses. | ? | Copute is discouraging, or hopefully eliminating, user code exceptions. Exceptions may be generated implicitly by primitive types, e.g. integer overflow. |
Static members are shared across all generic realizations[14] (during type erasure all realizations are folded into a single class). Note this is only a problem for mutable static members, e.g. those not final or a method. Note that it is not allowed to make a static member which is not a function (nor method) and has the type of, or that is parametrized by, a class type parameter, because these would present a type mismatch between two generic realizations. | Static members are separate for each generic realization. A generic realization is a unique class. | Scala's singleton object is shared across all type parameter realizations of the companion class, and it can not be type parametrized, but each method of a singleton object can be orthogonally parametrized (see next row of this table), which is essentially what is needed. Thus this is only a problem for mutable members of object, e.g. those that are var (not val or def), because they can not be type parametrized. | Although it is difficult to think of a case where there is benefit when static members are separate for each type parameter realization (why is a list of string differentiated from a list of integer for a static?), Copute could achieve this on output targets which use type erasure by creating a unique singleton object for each realization, but only necessary for those that have mutable static members. It is not yet decided if there are use cases that justify this, since afaics it is only necessary for a static member which is not a function (nor method) and has the type of, or that is parametrized by, a type parameter, because these would otherwise present a type mismatch between two generic realizations, but mutable statics are bad design. |
Type parameters cannot be used in declarations of static fields/methods or in definitions of static inner classes. | No restrictions on use of type parameters. | Each function (or method) in the singleton object can declare type parameters which are orthogonal to, or duplicates of, any type parameters of the companion class. Any companion class type parameters can be accessed, by inputting an instance of the companion class to the static function (or method), then calling a method in the companion class-- because there is no use for the concrete companion class type parameters without an instance of the companion class. As stated in prior row of this table, mutable static non-functions (a/k/a 'fields') can not be type parametrized, and they can't be allowed because they cause type mismatch between generic realizations (and are undesirable bad design any way). Inner classes are correctly orthogonally parametrized and the type parameters can be specified on each generic realization. | Ditto as per Scala, except this is done with 'static' keyword instead of singleton 'object' declaration. It would also be possible to simulate type parametrized mutable static non-functions (a/k/a 'fields'), per the prior row of this table, but probably undesirable. |
Cannot create an array where the component type is a generic realization (concrete parameterized type). | A generic realization is a 1st class citizen and can be used as any other class; also an array component. | "Scala does not know such a restriction, although it uses erasure." | Copute does not have such a restriction. |
Cannot create an array where the component type is a type parameter. | Type parameters represent actual, discrete classes and can be used like any other type within the generic definition. | "Scala does not know such a restriction, although it uses erasure." | Copute does not have such a restriction. |
There is no class literal for a concrete realization of a generic type. Thus instanceof and cast on a parametrized type are impossible. | A generic realization is an actual class. | There is no class literal for a concrete realization of a type parameter. Thus isInstanceOf, asInstanceOf, and pattern matching a parametrized type are impossible. | It is difficult to think of a case where there is a benefit to isInstanceOf, asInstanceOf, or pattern matching on a parametrized type, because it would be impossible to know a priori all the possible class literals to expect and this would not be checked at compile-time. An equivalent functionality can be checked at compile-time by employing function overloading, which is compatible with output targets that do type erasure. Although isInstanceOf and pattern matching could be provided by storing reflection data in each instance, or otherwise providing reflection, it is the implementation of asInstanceOf which would be more complex, as it would require casts on every use of the erased type parameter, e.g. T as String erased to Object. |
instanceof is not allowed with type parameters or concrete generic realizations. | The is and as operators work the same for type parameters as for any other type. | See prior row of this table. | See prior row of this table. |
Cannot create new instances using a type parameter as the type. | With a constructor constraint, generic methods or methods of generic classes can create instances of classes which have default constructors. | Cannot create new instances using a type parameter as the type. | It is poor design to instantiate explicit class literals (i.e. use new), so proper design will have all types implement a static type parametrized factory interface, then any type parameter which is upper bound on that interface, can be instantiated by calling the interface's factory method. This boilerplate could be automated in Copute for every concrete class that has an implicit (i.e. no explicit) or constructor without formal parameters. The concrete class could override factory method to provide custom behavior. Such a solution works with output targets that do type erasure. |
The unbounded type(s) of the parametrized type are erased during compilation. Special extensions to reflection must be used to discover the original type. | Type information about C# generic types is fully preserved at runtime, and allows complete reflection support as well as instantiation of generic types. | The unbounded type(s) of the parametrized type are erased during compilation. | It is difficult to think of a case of benefit from reflection for reading the unbounded type(s) of a parametrized type. It could be done if we can think of a justified use case. |
Proving the Halting Problem, by Dr. Seuss
The following is relevant to Copute, but it is also relevant to everything, including all science, religion, philosophy, and any search for truth and knowledge.
Recursion is a key feature of a Turing complete machine (i.e. a computer).
Russell's Paradox: there is no rule for a set that does not cause it to contain itself, thus all sets are infinitely recursive.
This is also a feature of Russell's Paradox, which says that any set also contains itself (nothing can be non-recursive; any such false barrier will be brittle and fail, due to Coase's Theorem).
And so follow all the other theorems, which all derive from and relate to the 2nd law of thermodynamics, which says the universe is always trending to maximum disorder (i.e. maximum possibilities):
* Liskov Substitution Principle: it is an undecidable problem whether subsets inherit.
* Linsky Referencing: it is undecidable what something is when it is described or perceived.
* Coase Theorem: there is no external reference point; any such barrier will fail.
* Gödel's Theorem: any formal theory, in which all arithmetic truths can be proved, is inconsistent.
* 1856 Thermo Law: the entire universe (a closed system, i.e. everything) trends to maximum disorder.
http://ebiquity.umbc.edu/blogger/2008/01/19/how-dr-suess-would-prove-the-halting-problem-undecidable/
Scooping the Loop Snooper
an elementary proof of the undecidability of the halting problem
Geoffrey K. Pullum, University of Edinburgh
No program can say what another will do.
Now, I won’t just assert that, I’ll prove it to you:
I will prove that although you might work til you drop,
you can’t predict whether a program will stop.
Imagine we have a procedure called P
that will snoop in the source code of programs to see
there aren’t infinite loops that go round and around;
and P prints the word “Fine!” if no looping is found.
You feed in your code, and the input it needs,
and then P takes them both and it studies and reads
and computes whether things will all end as they should
(as opposed to going loopy the way that they could).
Well, the truth is that P cannot possibly be,
because if you wrote it and gave it to me,
I could use it to set up a logical bind
that would shatter your reason and scramble your mind.
Here’s the trick I would use – and it’s simple to do.
I’d define a procedure – we’ll name the thing Q -
that would take any program and call P (of course!)
to tell if it looped, by reading the source;
And if so, Q would simply print “Loop!” and then stop;
but if no, Q would go right back to the top,
and start off again, looping endlessly back,
til the universe dies and is frozen and black.
And this program called Q wouldn’t stay on the shelf;
I would run it, and (fiendishly) feed it itself.
What behaviour results when I do this with Q?
When it reads its own source, just what will it do?
If P warns of loops, Q will print “Loop!” and quit;
yet P is supposed to speak truly of it.
So if Q’s going to quit, then P should say, “Fine!” -
which will make Q go back to its very first line!
No matter what P would have done, Q will scoop it:
Q uses P’s output to make P look stupid.
If P gets things right then it lies in its tooth;
and if it speaks falsely, it’s telling the truth!
I’ve created a paradox, neat as can be -
and simply by using your putative P.
When you assumed P you stepped into a snare;
Your assumptions have led you right into my lair.
So, how to escape from this logical mess?
I don’t have to tell you; I’m sure you can guess.
By reductio, there cannot possibly be
a procedure that acts like the mythical P.
You can never discover mechanical means
for predicting the acts of computing machines.
It’s something that cannot be done. So we users
must find our own bugs; our computers are losers!
================
Let me make the proof simpler for you. Suppose I write a function that returns either "Stops" or "Does not stop" when it analyzes any other function. Now I have my function call that function to analyze my function itself; it must return "Does not stop", because there is now an infinite loop.
The point is that any function can be called. The set of functions always contains itself.
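As a minimal code sketch of that diagonal argument (in Scala, assuming the hypothetical decider halts, which is exactly what cannot exist):
- Code:
// If a total decider `halts` existed, `q` would contradict it:
// whatever `halts` answers about q, q does the opposite.
def halts(program: () => Unit): Boolean =
  sys.error("no such total decider can exist")

def q(): Unit =
  if (halts(() => q())) while (true) {} // "stops"? then loop forever
  else ()                               // "loops"? then stop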
Mixins are just single-inheritance folding
I wrote some public comments today
http://www.codecommit.com/blog/scala/scala-for-java-refugees-part-5/comment-page-1#comment-5284
@Cedric
You missed the point of mixins. They completely solve the diamond problem because only duplicate methods which are from the EXACT SAME trait are resolved automatically. Any duplicates from different traits have to be resolved by the superclass in the single-inheritance chain (i.e. analogous to “Class::” in C++).
You can’t do a “Class::” specific method call from a trait (you must use the less specific “super.” instead), because mixin order can insert a mixin in the middle of the inheritance hierarchy of another mixin. For example, given mixin A (B) and mixin C (B), then mixin D (C,A) would effectively cause mixin C (A,B), because the B in mixin C (B) is already in the inheritance tree of mixin A (B), and thus not repeated. A more detailed explanation can be found at the top left of page 7, in the Super subsection of the 2.2 Modular Mixin Composition section of Scalable Component Abstractions, Odersky & Zenger, Proceedings of OOPSLA 2005, San Diego, October 2005.
http://www.scala-lang.org/sites/default/files/odersky/ScalableComponent.pdf
So I want you to realize that “Class::” specific method invocation can never be as flexible as mixins.
I studied this deeply for the design of my Copute language.
Shelby Moore III Sunday, February 13, 2011 at 2:12 am
typo: I meant “…have to resolved by the subclass…”
Shelby Moore III Sunday, February 13, 2011 at 2:14 am
@Cedric
Don’t conflate some programmer’s insufficient separation-of-concerns in the design of a trait with the Diamond Problem. The Diamond Problem is a specific problem of how to resolve multiple inheritance name spaces (scope).
Also please realize that Scala is a single-inheritance language, with the apparency of multiple inheritance of traits. In reality, traits get folded into that single-inheritance chain in a specific order and are NOT multiply inherited.
Shelby Moore III Sunday, February 13, 2011 at 2:24 am
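A minimal Scala sketch of that folding (my own illustration): with class D extends C with A, the traits are linearized into the single chain D, A, C, B, and each super call resolves to the next trait in that chain, not to a fixed parent.
- Code:
trait B { def who: String = "B" }
trait A extends B { override def who: String = "A -> " + super.who }
trait C extends B { override def who: String = "C -> " + super.who }
class D extends C with A // folded into one chain: D, A, C, B

object LinearizationDemo extends App {
  println((new D).who) // prints "A -> C -> B": A's super here is C, not B
}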
Constructors are considered harmful
http://gbracha.blogspot.com/2007/06/constructors-considered-harmful.html
Shelby added a comment:
Gilad is making the correct point that in best design practice composable modules (i.e. APIs) should expose abstract interfaces but not their concrete classes. Thus new becomes impossible, because there is no concrete class to instantiate. Apparently Gosling realized this.
Static factories in abstract interfaces can accomplish this with type parametrization.
Gilad Bracha wrote: The standard recommended solution is to use a static factory [...] You can’t abstract over static methods
Static methods can be abstracted (over inheritance) with type parametrization, e.g.
- Code:
interface Factory<+Subclass extends Factory<Subclass>>
{
    newInstance() : Subclass
}
where the + declares that Factory can be referenced (assigned) covariantly due to Liskov Substitution Principle. The + is unnecessary in the above example, but exists in the syntax generally to perform checking against LSP.
Thus any API can abstract over any publicly exposed Factory<T> by associating them privately with instances of Factory<InheritsFromT>. In other words, our interface could have been as follows, where the factory creates a new instance of itself, which can be any subtype because inherited return types may be covariant.
- Code:
interface Factory
{
    newInstance() : Factory
}
On the topic of concrete implementation inheritance and constructors, the Scala-like mixins cannot handle this, because external parameters for constructors are not allowed in a trait implementation. Thus each mixin is detached from the external inheritance order. I have taken this a step further in my design for Copute, because I force the separation of purely abstract interface and purely concrete mixin implementation, thus every mixin has to inherit from an abstract interface and can only be referenced as a type via that interface, i.e. concrete mixins are not types in any scope other than the mixin declaration.
Shelby added a follow-up comment:
I have refined the solution since I wrote the above. Above I was proposing that static factories in an interface could have their return type parametrized to the implemented subtype class. That does not solve the problem. Static factories in an interface are necessary for other SPOT (single-point-of-truth) and boilerplate elimination reasons, e.g. see my implementation of 'wrap' (a/k/a 'unit') in an IMonad (IApplicative), and notice how much more elegant it is than the equivalent Scalaz. Note SPOT is also a critical requirement for maximizing modularity, i.e. ease of composition and reuse.
Rather, to accomplish abstraction of constructors, we need to nudge programmers to input factory functions, so that any code can be abstracted over another subtype of an 'interface' (i.e. instead of calling a 'class' constructor directly, input a factory function which returns the result of calling a constructor; thus the caller can change the subtype being constructed). So the important point is that we want to force programmers to create an 'interface'(s) for all their 'class' methods, which is accomplished by not allowing method implementation (i.e. 'class' nor 'mixin') to be referenced anywhere except in the 'inherits' declaration and constructor call. This means the type of an instance reference can not contain an identifier which is a 'class' nor 'mixin', thus forcing the type of all instance references to contain identifiers which are an 'interface', i.e. instance references reveal the abstract interface, but do not indicate the implementation.
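A minimal Scala sketch of that nudge (the names are my own hypothetical illustration): the client takes a factory function and never names a concrete class, so the caller controls which subtype is constructed.
- Code:
trait Shape { def area: Double }
class Circle(r: Double) extends Shape { def area = math.Pi * r * r }
class Square(s: Double) extends Shape { def area = s * s }

// The client abstracts over construction: the caller injects the factory,
// so the constructed subtype can be swapped without touching this code.
def describe(make: Double => Shape, size: Double): String =
  "area = " + make(size).area

object FactoryDemo extends App {
  println(describe(new Circle(_), 2.0))
  println(describe(new Square(_), 2.0))
}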
So Copute will have a crucial difference from Scala and the other contenders (e.g. Ceylon), in that 'interface' and 'mixin' will be separated (not conflated in a 'trait'), and only 'interface' can appear in the type of instance references. Note that in Copute (just like for a 'trait' in Scala) 'mixin' and 'interface' may not have a constructor. Scala's linearised form of multiple inheritance is retained.
Note this unique feature of Copute (along with the elimination of virtual methods, a/k/a 'override') is also necessary to enforce the Liskov Substitution Principle (which relates to the concepts of covariance and contravariance):
(note some of the following specification is out-of-date with current specification in grammar and in my various notes around)
http://copute.com/dev/docs/Copute/ref/class.html#Virtual_Method
http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
http://copute.com/dev/docs/Copute/ref/class.html#Static_Duck_Typing
http://copute.com/dev/docs/Copute/ref/function.html#Overloading
Here are some more relevant thoughts from Gilad Bracha:
http://gbracha.blogspot.com/2008/02/cutting-out-static.html?showComment=1221878100000#c3542743084882598768
A singleton is a simply unique object. In most languages, you can use the static state associated with a class to ensure it only has one instance, and make singletons that way. But this only works because the class itself is a singleton, and the system takes care of that for you by having a global namespace.
In Newspeak, there is no global namespace. If you need singletons in your application, they are simply instance variables of your module. When you load your application, you hook up your modules and make a single copy of each.
If, on the other hand, you need a service that's accessible to an open-ended set of users, it has to be available at some public place - this could be a URL on the internet (the real global state) or a platform wide registry. In other words, it's part of the outside world's state.
Such world state may be injected into your application when it starts up (but only in as much as the platform trusts you to access it).
Not sure if this helps. The habit of static state is pervasive in computing and it's hard for people to get rid of it - but we will.
Note, Gilad Bracha helped write the Java specification.
See also this:
http://en.wikipedia.org/wiki/Hollywood_principle
re: The Smartphone Wars: Nokia shareholders revolt!
http://esr.ibiblio.org/?p=2961&cpage=1#comment-296658
Shelby wrote (as "Imbecile" but not the same "Imbecile" who posted other comments):
Nokia must focus on its inherent strengths, which is as a refinement innovator, not a paradigm innovator a la Steve Jobs. Microsoft will slow them down, because Microsoft is strong in neither, and Windows Phone is too far behind in a race of exponential rate of innovation. Due to the exponential function, it is too late for anyone to go back and start a new smartphone OS from scratch or even take time to complete an unfinished one, unless it will offer massive compelling advantages, which is probably unrealistic. The realistic forward innovations are precisely in Nokia's area of strength. By the end of 2011, the smartphone marketshare of Nokia will have eroded to teens and Android will be triple that.
Nokia should innovate on Android so they can ship a #1 selling smartphone in 2011, and incrementally differentiate themselves from the herd. It is potentially possible to co-opt Google with strategic innovations that diverge from the herd's common base. The Android platform is inherently fractured, as this is the desirable nature of open source. The opportunity is wide-open for Nokia to provide an unfractured Android platform. Popular innovations will eventually make their way back into the common base, but always on a lag-- look to Apple as a model of profitability as first-innovator.
There is no credible AppStore or iTunes on Android on the horizon. The opportunities to take the best of Android, win the race to market, and innovate are wide open. Do not fight against the exponential function. Embrace the strengths of open source, and your own strengths with respect to it-- this advice applies to everyone. Dinosaurs stand in the way of open source. Re-inventing Android as MeeGo at this stage is an enormous waste of capital, and the free market does not reward those who do not focus capital on their relative strengths. MeeGo is yet another coffin in the European culture cemetery of "politics 90% of the time, to get 10% production". The institutional investors are correct that the "American" (libertarian) culture of "Just Do It" wins, but the Elop and Microsoft selection is not even a shadow of that.
Why I should work on Copute, even if I never earn a penny
Net worth is overrated.
Accepting what is.
Note this was recorded without any forethought, just stream of thought while I was deep in programming just a few moments ago...
http://coolpage.com/accepting.mp3
The recording is my biblical insight into contentment.
I knew I was destined to be poor, even since I learned to love beans & rice. Wealth is not an indicator of success. It is better to have tried and failed, than to have wasted a life on "the highest ROI" as measured in gold & silver or any other metric of money.
Tension between CR and CTR for advertising!
Aha!
I remember from my high ($5000+ per month) PPC advertising spending back in 2000-2005:
http://www.ppcsummit.com/newsletter/pay-per-click/ad-copy-isnt-just-text/
While it is never a good idea to optimize ad text exclusively to CTR, if you can maintain or improve your conversion rate (CR) while also increasing CTR, you need to do so.
The problem is you can't always get them both to optimize together.
This is why paid advertising is not always the optimum model for maximizing knowledge and prosperity. The CR (conversion ratio of visitors to sales) is what matters most for maximizing knowledge and prosperity. The CTR (the click-through ratio of clicks to views of ads) is what Google needs to maximize their revenue.
I have realized that the way to maximize CR is if users could compete to suggest sites they like in the context of other sites, with small writeups. Sort of like blogging on another webpage, e.g. I could blog on Hommel's site or cnn.com, etc. The visitor would decide if they want to view these suggestions. They would be ranked by CTR, but realize that then the CR would always be maximized, because visitors wouldn't cost anything (no Google and source site charges) and would be optimized according to where the CTR is most effective, potentially with different ad copy for maximizing CTR for each possible site where the "ad" could be viewed, custom made by the users.
This would totally change the web, because ad sponsored sites would wither away. Knowledge sites and needed products would prosper more, and more efficiently with less waste and middle men.
Yeah I think this would be great. It would drastically shrink Google's future.
I am coming after those sites which are wasting their asset, with technology paradigm shifts that put the power in the hands of the readers to vote on what is most relevant.
Category Theory is critical to understanding functional programming deeply
The best explanation I have found, which is comprehensible to someone (like me) without a master's degree in category theory, is "Comprehending Monads" by Wadler.
You can Google for it, there is a PDF online.
I am on page 8; the first 7 pages were very well written, I was able to digest them in about 1 hour, and so far I understand them very well and deeply/thoroughly (I think).
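For readers coming from Scala, the core pattern Wadler formalizes in those pages can be sketched minimally as follows (my own illustration, using Option):
- Code:
// A monad is a type constructor with unit and bind obeying three laws.
def unit[A](a: A): Option[A] = Some(a)
def bind[A, B](m: Option[A])(f: A => Option[B]): Option[B] =
  m match { case Some(a) => f(a); case None => None }

// left identity:  bind(unit(a))(f)    == f(a)
// right identity: bind(m)(unit)       == m
// associativity:  bind(bind(m)(f))(g) == bind(m)(a => bind(f(a))(g))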
If you want to compare with a more abstract mathematical tutorial, here is a concise one:
http://www.patryshev.com/monad/m-c.html
Or overview:
http://homepages.inf.ed.ac.uk/jcheney/presentations/ct4d1.pdf
http://www.algorithm.com.au/downloads/talks/monads-are-not-scary/monads-are-not-scary-chak.pdf
Btw, Philip Wadler has been, over the past 2-3 decades, one of the most important researchers in the field of computer science:
http://homepages.inf.ed.ac.uk/wadler/vita.pdf
http://en.wikipedia.org/wiki/Philip_Wadler
Being Popular (computer language)
http://www.paulgraham.com/popular.html
Of course, hackers have to know about a language before they can use it. How are they to hear? From other hackers. But there has to be some initial group of hackers using the language for others even to hear about it. I wonder how large this group has to be; how many users make a critical mass? Off the top of my head, I'd say twenty. If a language had twenty separate users, meaning twenty users who decided on their own to use it, I'd consider it to be real.
Getting there can't be easy. I would not be surprised if it is harder to get from zero to twenty than from twenty to a thousand. The best way to get those initial twenty users is probably to use a trojan horse: to give people an application they want, which happens to be written in the new language.
Scala has critical defects; Copute will output to Scala w/o those defects
Copute will initially output to Scala; this is the fastest way to get a debugger/IDE for free, and the mapping from Copute to Scala is very straightforward, so time-to-market should be on the order of 3 months. HaXe has faded away as a potential target, both for lack of an IDE and also for missing critical features such as type parameter co-/contra-variance.
Scala (or maybe C#/.Net except Microsoft is dying) is currently the best hope for next mainstream OO+FP language.
Well I have finally gotten to the point where I think I can enumerate the critical things Copute will be able to do, that Scala apparently can not.
And apparently these affect the very ability to be abstract (i.e. reusable and composable), which is Scala's main and mnemonic claim to superiority ("Scala is scalable").
http://copute.com/dev/docs/Copute/ref/intro.html#Scala
P.S. If you had read the Copute docs previously, numerous egregious errors have since been corrected. Also the quality of the docs has been significantly improved (although it still needs more improvement).
Android is the killer app?
http://esr.ibiblio.org/?p=2975&cpage=3#comment-297417
Shelby (as Imbecile but not the same "Imbecile" who posted other comments) wrote:
Excuse if I'm not omniscient about such matters, so is the case that *nix conquered the server in 2008 on deadline, and Android is the killer client app? The OS became an app in a new abstraction named "cloud computing" and the network became the OS?
No. *nix conquered the server long, long ago.
Granted *nix server had majority market share long ago.
ESR cited an 83:23 (i.e. 78%/22%) split for new workloads circa 2007, so perhaps the conquering was 90/10% (roughly Pareto squared) complete by 2008, thus meeting ESR's deadline.
Is Android the killer app because it paradigm-shifted open source hackers to optimize for hardware without a keyboard, flatmapping the world closer to "the programmers are the users and vice versa"? Open source for the masses. On deadline for the roughly 10-year cycle for the imminent arrival of a new programming language for these masses:
1975: he started using "structured programming" techniques in assembly language
1983: a new era dawned for him as he started doing some C programming
1994: he started doing object-oriented programming in C++
2000: he made the switch to Java
Can Java be the language on Android, invalidating the 10 year cycle deadline? Will the next language be the virtual machine with a smorgasbord of grammars?
Tying this to the OODA and martial arts discussion, note that solving a problem by a "mapping to a new context" or "passing through an envelope" abstraction is a monad model, hence the mention of flatmap. Could the next programming language achieve monads-for-dummies?
Last edited by Shelby on Sat Mar 05, 2011 6:06 pm; edited 1 time in total
Scala's standard library may have fundamental semantic errors?
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-1#comment-5289
Shelby wrote:
Shelby wrote:
Perhaps I am missing something, but off the top of my head, I am thinking the following is semantically incorrect, because it makes all None equivalent. (Also, doesn't this force Nothing to be a subtype of every possible A, so that bind can always be called with the same function whether it is a Some or a None?)
- Code:
case object None extends Option[Nothing] {
  def bind[B](f: Nothing => Option[B]) = None
}
Is that the way it is implemented in the Scala standard library? It seems to me that None should be parametrized too, so that a None for one type (e.g. String) isn't equal to a None for another type which is not covariant (e.g. Int).
- Code:
// Hypothetical alternative: a parametrized None (note the () required on a
// parameterless case class, both at declaration and construction).
case class None[+A]() extends Option[A] {
  def bind[B](f: A => Option[B]) = None[B]()
}
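For contrast, here is a minimal sketch (mine, assuming Scala 2's actual standard library, where None is a single object) of the behavior being questioned:
- Code:
// Actual stdlib behavior: one None object for all T, so differently-typed
// Option references compare equal.
val os : Option[String] = None
val oi : Option[Int] = None
println( os == oi ) // true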
Static methods in interface; doing monads correctly for OOP
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-1#comment-5291
Shelby wrote:
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5333
Shelby wrote:
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5334
Note in Copute, Daniel's sequence would be coded:
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5347
Shelby wrote:
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5349
Shelby wrote:
Caveat: none of the following code is tested and I am new to Scala and have never installed the Scala (nor the Java) compiler.
Daniel's "typeclass" is a fully generalized convention for declaring static methods of an interface. Imagine you could declare static methods in a trait with this pseudo-code.
- Code:
trait Monad[+X] {
  static def unit[Y] : Y => Monad[Y]
  def bind[M <: Monad[_]] : (X => M) => M
}
sealed trait Option[+X] extends Monad[X] {
  static def unit[Y]( y : Y ) : Monad[Y] = Some( y )
}
To get legal Scala, this is translated as follows, noting that the +, -, or no variance annotation on M depends on where Monad appears in the static methods of Monad.
- Code:
trait Monad[+X] {
  def bind[M <: Monad[_]] : (X => M) => M
}
trait StaticMonad[+M[_]] {
  def unit[Y]( y : Y ) : M[Y]
}
sealed trait Option[+X] extends Monad[X] {}
// Note: Scala requires an implicit object to be nested inside an object,
// class, or trait; it is shown at the top level here for brevity.
implicit object OptionStaticMonad extends StaticMonad[Option] {
  def unit[Y]( y : Y ) : Option[Y] = Some( y )
}
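To illustrate the intent, a usage sketch (my assumption that the above declarations are nested in an enclosing object, as Scala requires for implicits):
- Code:
// The caller summons the "statics" for a type constructor by implicit
// resolution, never naming OptionStaticMonad directly.
val tc = implicitly[StaticMonad[Option]] // resolves to OptionStaticMonad
val one : Option[Int] = tc.unit( 1 )     // Some( 1 )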
Before we can add the cases for Option, note that Monad requires "unit" to be invertible, i.e. bijective, but None has no inverse, so we need an injective monad.
- Code:
trait InjMonad[Sub[_] <: InjMonad[Sub[_],X], +X] {
  def bind[Y] : (X => Sub[Y]) => Sub[Y]
}
sealed trait Option[+X] extends InjMonad[Option,X] {}
case class Some[+X](value: X) extends Option[X] {
  def bind[Y]( f : X => Option[Y] ) : Option[Y] = f( value )
}
// A parameterless case class requires (), both at declaration and construction.
case class None[+X]() extends Option[X] {
  def bind[Y]( f : X => Option[Y] ) : Option[Y] = None[Y]()
}
Thus Daniel's sequence.
- Code:
// The implicit parameter must be in its own parameter list in Scala.
def sequence[M[_], X]( ms : List[M[X]] )( implicit tc : StaticMonad[M] ) = {
  ms.foldRight( tc.unit( List[X]() ) ) { (m, acc) =>
    m.bind { x =>
      acc.bind { tail => tc.unit( x :: tail ) }
    }
  }
}
Note that that syntax is peculiar to Scala; here is a more widely readable version:
- Code:
def sequence[M[_], X]( ms : List[M[X]], implicit tc : StaticMonad[M] ) = {
  ms.foldRight( tc.unit( List[X] ), (m, acc) =>
    m.bind( x =>
      acc.bind( tail => tc.unit( x :: tail ) )
    )
  )
}
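A usage sketch (my example, not Daniel's, assuming the Option declarations above are in scope):
- Code:
sequence( List[Option[Int]]( Some(1), Some(2), Some(3) ) ) // Some( List(1, 2, 3) )
sequence( List[Option[Int]]( Some(1), None[Int]() ) )      // None: one failure fails the whole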
Note my version of Daniel's sequence will work with both the bijective Monad and the injective InjMonad, because the call to bind is a method of the instance; whereas Daniel's version assumed the injective monad, and I see no possible way to fix it using his convention of implicit duck typing of non-static methods. His is an example of how duck typing breaks composability.
==================
**** Monad Theory ****
==================
The best layman's explanation I have found so far is "Comprehending Monads" by Philip Wadler, 1992. Google for the PDF.
Conceptually a monad has three functions:
- Code:
unit : X -> M[X]
map  : (X -> Y) -> M[X] -> M[Y]
join : M[M[X]] -> M[X]
The map function might be curried two ways:
- Code:
map : (X -> Y) -> (M[X] -> M[Y])
map : M[X] -> ((X -> Y) -> M[Y]) // Will use this for trait below
We must overload the map function if M is not the same type as N, because otherwise map will not know which "unit" to call (in order to lift Y => M[Y]); overloading on return type alone is ambiguous due to covariance:
- Code:
map  : (Y -> M[Y]) -> (X -> Y) -> N[X] -> M[Y]
bind : (X -> M[Y]) -> N[X] -> M[Y]
map a b = bind( x -> a( b( x ) ) )
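As a concrete sketch of those equations (mine, using plain functions over the standard Option rather than the trait encoding that follows):
- Code:
// The three monad operations for Option, with map and join derived from
// bind and unit, per the equations above.
def unit[X]( x : X ) : Option[X] = Some( x )
def bind[X, Y]( m : Option[X] )( f : X => Option[Y] ) : Option[Y] =
  m match { case Some( x ) => f( x ); case None => None }
def map[X, Y]( m : Option[X] )( f : X => Y ) : Option[Y] =
  bind( m )( x => unit( f( x ) ) ) // map a b = bind( x -> a( b( x ) ) )
def join[X]( mm : Option[Option[X]] ) : Option[X] =
  bind( mm )( identity )           // join collapses one level of M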
The reason I rephrased the abstracted monad as an inherited trait with static methods is that, so far in my research, I don't agree with a general "implicit" keyword for a language design. The general use of duck typing can violate the localized single-point-of-truth (SPOT) and can make semantic assumptions that were not intended, because duck typing forces all traits and classes to share the same member namespace, and thus essentially bypasses the behavioral conditions of the Liskov Substitution Principle contract of OOP. Also, since duck typing does not explicitly state which interfaces are required at the SPOT of the trait or class declaration, there is no way to know which interfaces are available by looking in one place. Localization (separation) of concerns is a critical attribute of reusable/scalable software design. Again, the following is pseudo-code for the translation of static methods to implicit, but now fully generalized to monad theory.
- Code:
trait Monad[+X] {
  static def unit[Y] : Y => Monad[Y]
  def bind[M <: Monad[_]] : (X => M) => M
  def map[M <: Monad[Y], Y]( a : Y => M, b : X => Y ) : M = bind( x => a( b( x ) ) )
  static def join[M[_] <: Monad[_], Y] : M[M[Y]] => M[Y]
}
But the above trait won't work for monads whose "unit" is not bijective, i.e. where the inverse of "unit" is lossy; e.g. the None option has no inverse. The injective monads know which "unit" to call, so we could add a map to our prior injective monad which does not input a "unit".
- Code:
trait InjMonad[Sub[_] <: InjMonad[Sub[_],X], +X] {
  def bind[Y] : (X => Sub[Y]) => Sub[Y]
  def map[Y] : (X => Y) => Sub[Y]
}
sealed trait Option[+X] extends InjMonad[Option,X] {}
case class Some[+X](value: X) extends Option[X] {
  def bind[Y]( f : X => Option[Y] ) : Option[Y] = f( value )
  def map[Y]( f : X => Y ) : Option[Y] = OptionStaticMonad.unit( f( value ) ) // Some( f( value ) )
}
case class None[+X]() extends Option[X] {
  def bind[Y]( f : X => Option[Y] ) : Option[Y] = None[Y]()
  def map[Y]( f : X => Y ) : Option[Y] = None[Y]()
}
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5333
Shelby wrote:
I will offer two improvements to my prior comment-- the prior comment wherein I had proposed a conceptual mapping of pseudo-code "static" interface members to legal Scala syntax.
Note that the StaticMonad trait (in my prior comment) is necessary to enable accessing "statics" on types (e.g. M[_]) that are otherwise unknown due to type erasure (e.g. Daniel's sequence function example), but StaticMonad is not used for direct invocation of statics, e.g. Option.unit( value ). Thus a necessary improvement is to rename object OptionStaticMonad to object Option, which makes it the companion of trait Option (or does Scala only allow a companion object for a class?):
- Code:
implicit object Option extends StaticMonad[Option] {
  def unit[Y]( y : Y ) = Some( y )
}
Also, to give functionality similar to what we expect from "static" in Java, some macro (or other language compiling to Scala) could automatically generate the statics for each derived class that did not override them, e.g. as follows. Although this example seems superfluous, it is not harmful, and the generality is needed in other examples.
- Code:
implicit object Some extends StaticMonad[Some] {
  def unit[Y]( y : Y ) = Option.unit( y )
}
implicit object None extends StaticMonad[None] {
  def unit[Y]( y : Y ) = Option.unit( y )
}
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5334
Expounding on my prior comment: in the situations where StaticMonad[M] is employed (versus directly accessing the singleton), the type M is unknown, and thus it is a more composable abstraction, which inverts the control of the access to the singleton statics and gives that control to the caller:
http://en.wikipedia.org/wiki/Hollywood_Principle
http://lists.motion-twin.com/pipermail/haxe/2011-February/041527.html
Type erasure is an orthogonal issue, which forces the use of an implicit as a function parameter, versus the compilation of a separate function body for each possible M in reified languages. Even if Scala were reified, the trait StaticMonad would still be necessary to abstract the inversion-of-control on singletons. Thus the declaration of implicit instances and parameters is justified by type erasure, but they (along with StaticMonad) could just as well be hidden to make a non-reified language appear to be reified. Which is what I was illustrating with pseudo-code examples.
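To make the inversion-of-control concrete, a sketch (my illustration) of how the caller's choice of M selects the singleton:
- Code:
// The caller fixes M = Option at the call site; the compiler supplies the
// matching singleton (OptionStaticMonad) as the implicit tc parameter.
// No body of sequence ever names the singleton: control is inverted.
val r = sequence( List[Option[Int]]( Some(1), Some(2) ) )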
Note in Copute, Daniel's sequence would be coded:
- Code:
pure sequence<M<X> : Monad<X>, X>( ms : List<M<X>> ) = {
  ms.foldRight( M.unit( List<X> ), \m, acc ->
    m.bind( \x ->
      acc.bind( \tail -> M.unit( tail.append(x) ); );
    );
  )
}
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5347
Shelby wrote:
A variance annotation on Sub was missing in my prior comment, and should be as follows:
- Code:
trait InjMonad[+Sub[_] <: InjMonad[Sub[_],X], +X] {
  def bind[Y, S[Y] >: Sub[Y]] : (X => S[Y]) => S[Y]
}
Without that change, then Some and None would not be subtypes of Option, because Sub was invariant.
Also I am thinking the following is more correct, but I haven't compiled any of my code on this page:
- Code:
trait InjMonad[+Sub[X] <: InjMonad[Sub,X], +X] {
  def bind[Y, S[Y] >: Sub[Y]] : (X => S[Y]) => S[Y]
}
I am not sure if that is legal in Scala, but seems to me that is the only way to express that Sub's type parameter is X.
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5349
Shelby wrote:
My prior idea for expressing X to be the Sub's type parameter is not retracted.
However, my suggestion of a covariant annotation on Sub is erroneous, for the reason I had stated in prior comments-- Sub's lifted state may not be invertible (e.g. None has no value of type X) and thus there may be no mapping from a Sub to its supertype. Thus the correction, restoring an invariant Sub while keeping the other idea, is:
- Code:
trait InjMonad[Sub[X] <: InjMonad[Sub,X], +X] {
  def bind[Y] : (X => Sub[Y]) => Sub[Y]
}
It was incorrect when I wrote that this invariant Sub prevents Some and None from being subtypes of Option. Some#bind and None#bind type signatures contain Option, not Some and None respectively.
I cannot think of a useful subtype that would overload bind; but in that case, in my paradigm the subtype could multiply inherit from Monad with a distinct Sub. In Daniel's typeclass paradigm this would entail adding another singleton implicit object that inherits from Monad.
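For reference, a self-contained sketch of this final formulation, which I believe compiles under Scala 2 (names changed to Opt/Som/Non to avoid clashing with the standard library):
- Code:
trait InjMonad[Sub[A] <: InjMonad[Sub, A], +X] {
  def bind[Y]( f : X => Sub[Y] ) : Sub[Y]
}
sealed trait Opt[+X] extends InjMonad[Opt, X]
case class Som[+X]( value : X ) extends Opt[X] {
  def bind[Y]( f : X => Opt[Y] ) : Opt[Y] = f( value )
}
case class Non[+X]() extends Opt[X] {
  def bind[Y]( f : X => Opt[Y] ) : Opt[Y] = Non[Y]()
}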
Last edited by Shelby on Thu Mar 10, 2011 2:59 am; edited 15 times in total
Americans are innovators/individualists; Europeans are followers/statists?
http://esr.ibiblio.org/?p=2975#comment-297554
http://esr.ibiblio.org/?p=2987#comment-297669
All of our US customers purchased the product because they wanted to do something different with it. Like connecting a tektronix vector graphics terminal, or a numerically controlled underwater welding machine.
All of our European customers purchased the product because they wanted something that worked just like the IBM products but cheaper.
http://esr.ibiblio.org/?p=2987#comment-297669
Overall this Europe = socialism meme is overdone by both sides of the political spectrum in America. It’s more like America is 45% socialist, while European countries vary from say 45-65% (yes, I’d put Switzerland on par with America). And that doesn’t always come in the same places, either. While Sweden is seen by many as the archetypal Euro-socialist state, and it certainly has much higher taxes than USA, it doesn’t, for instance, have a minimum wage, and Britain with its NHS has far less union militancy than the USA seems to.
Last edited by Shelby on Mon Feb 28, 2011 4:57 pm; edited 1 time in total
Pre/postconditions can be converted from exceptions to types
Major breakthrough! I figured out how to convert pre-condition rules to post-conditions on unboxing types! Wow!
Shelby wrote in email:
Hi Barbara Liskov, PhD,
Has anyone else done research showing that all pre- and post-conditions can be converted from exceptions to types?
Here is my one paragraph exposition:
http://copute.com/dev/docs/Copute/ref/intro.html#Convert_Exceptions_to_Types
On a related topic, regarding how I applied your principle to interface vs. mixin, I hope I haven't misapplied your famous Liskov Substitution Principle:
http://copute.com/dev/docs/Copute/ref/class.html#Virtual_Method
My abstract conclusion to incite your interest, "Thus Liskov Substitution Principle effectively states that whether subsets inherit is an undecidable problem.".
Here is a bit more:
"In order to strengthen the semantic design contract, it has been proposed to apply preconditions and postconditions on the variants of the interface. But conceptually such conditions are really just types, and can be so in practice. Thus, granularity of typing is what determines the boundary of semantic undecidability and thus given referential transparency then also the boundary of tension for reusability/composablity. Without referential transparency, granularity increases the complexity of the state machine and this causes the semantic undecidability to leak out (alias) into the reuse (sampling of inheritance) state machine (analogous to STM, thread synchronization, or other referentially opaque paradigms leaking incoherence in concurrency)."
Other places where your LSP is discussed:
http://copute.com/dev/docs/Copute/ref/function.html#Parametrized_Types
http://copute.com/dev/docs/Copute/ref/class.html#Parametrized_Type
http://copute.com/dev/docs/Copute/ref/class.html#Parametrized_Inheritance
http://copute.com/dev/docs/Copute/ref/intro.html#Scala
Note all of this is a work in progress, so there may be numerous mistakes.
Apologies if I have abused your time.
Best Regards,
Shelby Moore III
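To make the one-paragraph exposition concrete, a minimal sketch (my illustration, not from the linked doc): the precondition "head requires a non-empty list", normally enforced by a thrown exception, becomes a postcondition on an unboxing type that the caller is statically obligated to check.
- Code:
// Precondition-as-exception: List#head throws on an empty list.
// Precondition-as-type: head returns a type that must be unboxed,
// moving the check from runtime to the type system.
sealed trait Headed[+A]
case class Head[+A]( value : A ) extends Headed[A]
case object Empty extends Headed[Nothing]

def head[A]( xs : List[A] ) : Headed[A] = xs match {
  case h :: _ => Head( h ) // postcondition carried by the returned type
  case Nil    => Empty     // caller must match before using the value
}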
Void or Nothing never create referencable instances
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-1#comment-5308
Shelby wrote as "III erooM yblehS":
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-1#comment-5313
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5316
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5318
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5321
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5324
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5325
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5327
Shelby wrote as "III erooM yblehS":
For some reason the blog is not accepting my post under my former name. This is very important.
Any and Nothing
Iry's article predicates that List[Nothing], a/k/a Nil, is necessary because List#head would otherwise throw an exception for an empty list. But there is another way: List[T]#head could return an Option[T]. In the way Option is currently structured, that would be an Option[Nothing], but I proposed instead it could be a None[T]. So it seems Nothing is not required in this use case? Is there any other compelling reason for Nil? My K.I.S.S. design instincts tell me to avoid unnecessary special-case idioms, so I would toss Nil if it does not have any other compelling reason to exist. Note the "stupid" in KISS does not refer to the designer; it means do not reduce abstraction with more refined knowledge (more narrow subtypes) than necessary.
Maybe I am mistaken, but in the referentially transparent calculus, a function can never return Nothing (a/k/a Void), because a function must not have side-effects, thus it has no reason to exist (or be called) if it does not return a value. Thus an instance of Nothing can never be created in such a calculus, and it can only be referenced by cast from a contravariant type parameter. In short, Nothing can never be substituted for another type, except as a contravariant type parameter.
If we reason about a type as analogous to an overloaded function, where covariance is defined by overloads that have contravariant parameter types and covariant return types (where all types are just functions), then the Nothing type is the infinite set of functions that take parameters of type Any and return Nothing, and the Any type is the infinite set of functions that take parameters of type Nothing and return Any.
Thus Any and Nothing are only ever strictly necessary as references to a concrete covariant or contravariant instance respectively, because any program can be restructured into a referentially transparent one (e.g. by using a State monad).
The key insight is that a type system is not substitutable in the contravariant direction.
Thus Option[Nothing] or List[Nothing], when used as the return type where there is a covariant type parameter, expresses that the type parameter can be anything. But this violates the contract that the Liskov Substitution Principle depends on, that a supertype has a greater set of possible subtypes than any of its potential subtypes. A covariant type system is (injectively) one-to-many in the covariant direction, but this is not invertible (bijective) in the contravariant direction. Generality does not increase in the contravariant direction, but generality does decrease in the covariant direction. In short, Nothing must never be on the right-hand side (rhs) of a substitution in a covariant type system (except as a contravariant type parameter). Violation of this rule creates aliasing error, as I explained in my prior comments-- None (a/k/a Option[Nothing]) erases the concrete type parameter of an instance and means literally "I forgot and I am in an unknown random state" (no many-to-one mapping allowed in the type system). The Scala compiler should be giving an error when Nothing is substituted for a covariant type parameter in Option[Nothing].
Unlike Any, Nothing should be only a reference type, meaning you can substitute any contravariant subtype to it, and cast back to any contravariant subtype from Nothing. You must never substitute Nothing for a covariant type. Maybe Scala gets away with this for now due to type erasure (or maybe we can find an edge case where it fails), but it is on shaky ground.
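For reference, a sketch of the two designs under discussion (assuming the Scala 2 standard library):
- Code:
val xs : List[Int] = Nil            // Nil, i.e. List[Nothing], substituted covariantly
// xs.head                          // throws NoSuchElementException on an empty list
val h : Option[Int] = xs.headOption // the Option-returning alternative: None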
Unless someone can find a hole in my analysis, I am thinking to email this to Odersky or file a bug report for Scala on this. But certainly I must be wrong? This is already well entrenched in Scala. I hope I am wrong.
@anonymous
Hopefully I have explained why the cost of a single None for all T, is semantically erroneous and will break. If I am mistaken, I appreciate anyone that can elucidate.
====================
Okay, after further thought, although (so far) I maintain my stance against Nil and Option[Nothing], because Option[T] could suffice for an empty container...
I realize that None extends Option[Nothing] is not a problem, because it can not cause a reference to an instance of Nothing. Even if there are references to Option[T] for different T that point to instances of Option[Nothing], there can never be different references pointing to Nothing. Even if we cast the Option[T] references to Option[Nothing] references, then we've lost the ability to cast back to Option[T], but this is not a problem per se-- it is the None designer's choice. It is an oddity that is not available in the contravariant direction (i.e. with Any), because contravariance only occurs on type parameters. The designer is implying that None are either always equal or never equal, regardless of supertype. Thus, the only potential problem with None is if #equals does not always return false when either parameter is None (a/k/a Option[Nothing]). Also the orthogonal problem that #get throws an exception.
Click here for a summary that provides context. I excerpt the key portions.
Any (covariant) type, except Nothing, may be substituted for (i.e. assigned to) the abstract type Any (the most general type), i.e. all (covariant) types implicitly inherit from Any. Whereas, an Any must not be substituted for (i.e. assigned to) another (covariant) type, without a runtime cast that has been suitably conditioned to avoid throwing an exception. A Nothing will never be substituted for (i.e. assigned to) another type, because a reference to a Nothing can never be created.
Any contravariant type parameter (only type parameters have the possibility to be contravariant), including Any, may be substituted for (i.e. assigned to) the abstract type Nothing (the least general type, which means "nothing"), i.e. all contravariant types implicitly inherit from Nothing. When a Nothing occurs in the context of inheritance from a covariant, or substitution by a contravariant, type parameter, it will never create a reference to a Nothing, because due to substitutability rules a contravariant type parameter can never be returned nor read; and in the covariant inheritance case, an instance of Nothing is never allowed to be constructed, because by definition Nothing does not know which covariance-tree branch its instance may be on.
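In Scala terms, a sketch (mine) of the substitution rules excerpted above:
- Code:
val a : Any = "anything" // every (covariant) type may be assigned to Any
// val s : String = a    // error without a runtime cast: a.asInstanceOf[String]
// No expression can ever produce an instance of Nothing; only non-returning
// terms (e.g. a throw) have that type, so no reference to a Nothing is created.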
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-1#comment-5313
@Horst
Hello,
I later discovered via trial&error that my posts were not being accepted because I was posting from an IP address that is apparently spam-blocked by the blog service this site uses (WordPress?). So I changed IP addresses (via a VPN I have in UK or USA, whereas I am in Asia), then I posted under the mirrored name, and the new name caused my posts to go into moderator queue (hence they've been released from queue to the site, I assume by Daniel).
Once I realized that I had some lapses in logic in my posts under the mirrored name, I posted the corrected logic under my unmirrored name, on the unblocked IP address; thus my last post appeared on the site immediately (before the ones in queue).
Some of your reply apparently does not reflect the corrections in logic I made in my last post, i.e. appears you may be replying to the post I made under the mirrored name without acknowledging that I already corrected the logical error you are pointing out.
For example, I think maybe (not sure) you missed the point I was making about Liskov Substitution Principle. First of all, in addition to method parameter and return types, there are several more requirements (click here). Thus, LSP requires that every subtype has fewer possible states than its supertype, and this is exactly what gives rise to the requirements on methods. This will become more clear if you study this link (click here), which illustrates that LSP really applies to the set of possible states.
Thus I was pointing out that Nothing is a subtype of every possible class, thus it has more possibilities than any supertype, except Any. Thus my first post asserted that it violated LSP if assigned in a covariant position (and actually that is still true, if assigned not as a type parameter but as an instance, though this can never occur). But then in my last post, I made the point that if Nothing can never be created as an instance (as you say, a "blank"), then LSP does not apply in that respect, because Nothing is never the type of a method parameter, nor an assignable return value. We thus see Nothing only ever applies for type parameters, or a return type that is never assigned. Thus the problem of Nothing is avoided. Agreed, thus there is no aliasing error, because the number of instances of Nothing created is exactly 0, which is what I explained in my last post.
Disagree, None should not equal None, because if I have two Option[T] references for different T, both of which point at a None, then they should not report they are equal.
My point about Iry's article was not that Nothing is unnecessary in every use case, only to state that if List#head returns Option[T] then we have to unbox it with "match-case" to get at the T value. Thus whether None is an Option[Nothing] or None[T] is arbitrary design choice. Either way, we have to do a match-case and handle the possibility of an empty List.
List may or may not return an Option[T] (I haven't checked), yet if not, I can code a Scala library where List#head returns an Option[T]. We are not forced to use the standard library. We are free to explore all possibilities and choose. It helps to make the standard library better, if we question everything, and accept only what passes our best analysis. I think Linus Torvalds stated, as paraphrased by Eric Raymond, "Given enough eyeballs, all suboptimal things ('bugs') are shallow". I have not yet formed a final opinion, discussion is part of discovery and analysis.
Nothing (a/k/a void) is only needed for return types that never return (which means they don't apply in a strictly referentially transparent system like Haskell if an error monad is employed, and they never create instances in a system with side effects like Scala, Java, etc), or for type parameters, which also means an instance is never created. So reasoning about bottom types are not really needed, except as a supertype for contravariant type parameters. All other cases were handled without thinking and teaching void to be a "bottom type" (although that is what is was). This is important for me, because I am putting much thought into how to simplify Scala and make it more palatable for the masses. That is why you are seeing the genre of analysis from me that you are-- I am questioning everything. It is my job to do so.
Friendly reminder, your use of argument and parameter is reversed from their definitions. The parameter is in the declaration of the function, the argument is in the function call (apply).
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5316
@Horst
Hi again,
Adding members to a subtype does not increase the # of potential LSP states, as the states that LSP is referring to are types in the inheritance hierarchy, not data in the types.
Nothing has infinite possible LSP states, just as Any does. The only reason we can use Nothing without breaking LSP is that there can never exist a reference to Nothing. Nothing is not a referencable type. Try creating a reference with the type of Nothing in Scala; I expect it won't compile.
val any : Nothing = any instance you like
And that is why I find it misleading to use Nothing in None, because it causes confusion for anyone who thinks of LSP fundamentally (not that many people do, and the concept of a bottom type is foreign to most OOP programmers). But on further thought, one eventually realizes that Nothing can be used as a type parameter without ever creating a reference to Nothing. Imagine trying to teach this to Java programmers? I am thinking I will rather hide Nothing as Void (ah, everybody is comfortable with void as "nothing" for return types and functions that take no parameters), and avoid ever mentioning it for use as a type parameter, except for contravariant type parameters, which are rare because they can never be read. Generality is nice and dandy, except when the Java community might have to expend 10,000 man-hours to deal with all the confusion about "what is Nothing?" for a concept that is only necessary for 0.001% of the use-cases.
In my last 2 posts (and this one, so 3), I am not asserting that Option[T] can not point to None, instead of None[T]. I am saying that None should never report true for #equals. Let's not conflate orthogonal issues.
Please check my logic here. None should never equal itself. Let me repeat as an example:
- Code:
val os : Option[String] = None()
val ol : Option[List[_]] = None()
if( os == ol ) // Do I have the same list or string?
That assumes that Some[T] implements #equals by comparing the value it stores.
Apparently Haskell does have a bottom type, but you would only use it when your monad called some system function that could never return:
http://www.haskell.org/haskellwiki/Bottom
That is why I said you wouldn't encounter it if you used an error monad that would instead spool (chain) all errors back out to the main return, i.e. that an error call would simply unwind the function stack, and not cause a non-returning function.
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5318
I summarized it more concisely now:
Void is not a referencable type, it is only used to express inheritance (covariance) relationships, or the absence of function parameters and/or a return value.
Replace Void with Nothing for Scala.
Off topic: Copute separates trait into interface and mixin, where mixin is not a referencable type, so the concept of non-referencable type is not so esoteric.
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5321
@Horst
Thanks for running the test, because I have never run a Java, Scala, or Haskell compiler or interpreter in my life (I do it all in my head for now, because it forces me to think more deeply).
So that looks like a bug to me, as I expected. It should report false, but it reports true. The null bug doesn't make semantic sense to me either. All of those look like bugs to me. I vaguely remember having seen that null bug mentioned as a bug in general language design circles.
Nothing has infinite potential subtypes in the contravariant type parameter case. In the covariant case, it has infinite potential supertypes. But this does not cause a problem because it can never appear as an assignable reference in any covariant or contravariant case, and LSP is all about reference substitutability (you can refer that link I gave before about LSP as sets of possible states). Thus LSP does not apply to Nothing, because we are never substituting any reference with or from it. Nothing is only used for expressing inheritance relationships, or for indicating no value, never for actual substitutions.
Agreed on having a sound type system with no edge cases that fail.
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5324
@Daniel
In the case of general programming and logic constructions, of course you are correct that we need to distinguish, for function returns, between:
a) never returns, does not exist (Nothing, undefined in JavaScript)
b) returns but no value, exists but has no value (Void, Unit, null in JavaScript)
and, for function calls, between:
a) non-callable/unreachable (Nothing, undefined method in JavaScript)
b) called with no parameter (Void, Unit parameter)
(A rough Scala mapping of the four cases follows below.)
Thanks for raising the issue.
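Rough Scala analogues of the four cases (my assumed mapping, for concreteness):
- Code:
def never : Nothing = throw new RuntimeException // (a) never returns
def noValue() : Unit = ()                        // (b) returns, but with no value
def unreachable( x : Nothing ) : Int = x         // (a) non-callable: no argument can exist
def constant() : Int = 42                        // (b) callable with no parameter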
But remember, I made the subtle (and probably buried) point that in a 100% referentially transparent, statically typed language, we never have functions that don't return, nor functions that don't return a value, nor functions that are non-callable, although we may have functions which are called with no parameter-- they are the functional equivalent of constants. So in that genre of language, where there is no overlap between them, there is no need to have separate Void and Nothing, so one could decide to call them by the same name (what I meant by "hiding Nothing in Void", since the inheritance uses are so esoteric and rare), and then use Void semantics for functions and Nothing semantics for inheritance, but always with the one keyword Void.
We can relax the requirements for non-overlap. We just require to never have functions that don't return (notice that Copute disallows thrown Exceptions, use an error monad if you want to unwind the function state to terminate), nor functions that are non-callable.
In the Curry-Howard isomorphism, Bottom corresponds to a false formula (and Void a true formula), which means any program that contains a Bottom with respect to a function, is not a true formula in the corresponding logic mapping. Thus eliminating Bottom in function context, makes our programs true proofs of themselves. A false formula introduces undecidability.
If I say that Void is unreferencable, then it is consistent with a unit type for use in functions. It is also consistent with a nothing (bottom) type in inheritance.
I hope I didn't make an egregious error, but that seems correct to me at this groggy moment 18 hours into my work day.
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5325
@Horst
If I remember correctly, the legacy treatment of null is due to implicit conversion to boolean false, so that if( object ) tests are less verbose than if( object != null ). So false == false is true.
One problem in many languages is that null is hard-wired to every type, so we are forced to test for null everywhere. Modern languages supply the Option type (Exception monad) instead, which we can then employ selectively on only the types we need exceptions on.
If the semantics of null is "does not exist", then why should ("does not exist" : Option[String]) == ("does not exist" : Option[List]) be true? For that matter, why should ("does not exist" : Option[String]) == ("does not exist" : Option[String]) be true?
The programmer is asking of equals in that context, "are these pointing to the identical pair of Some instances?". Not asking "are these pointing to the identical pair of either both Some or both None/null instances, but not one of Some and one of None/null?".
Shouldn't the answer agree with the question?
Very sleepy, hope this was coherent. Probably I can not add anything more of significance on this issue of None#equals.
http://www.codecommit.com/blog/ruby/monads-are-not-metaphors/comment-page-2#comment-5327
@Horst and @anonymous
Even though I originally (in my early comments) noticed that None is an object, somehow it escaped me, and I didn't correlate your point from a few comments ago: the key advantage of,
case object None extends Option[Nothing]
versus,
case class None[+T] extends Option[T]
is due to the object versus class: the former needs only one instance of None for the entire program, which will be much more efficient when passing these Option types everywhere we would need a null in other languages. I of course originally thought the single object was a problem, until I realized the distinction between Option[Nothing] and the inability to create a reference to Nothing, the orthogonal issue being the equality comparison result.
So I fully capitulate that the first way is superior. Until someone can elucidate otherwise, I think equality comparison for None should always return false, even when the other operand is None.
Thanks to both of you and Daniel for all the help. I hope this has been stimulating or otherwise helpful in some way. Apologies if we veered from the monad theme of this article.
My original main point was to show that the bind operator portion of a monad can be an inherited trait instead of an implicit typeclass. I am happy the discussion forked, because I gained numerous language design insights.
re: Every 10 years we need a new programming language paradigm
The blog author has been programming since 1969; the commentary is that Scala allows too much complexity (an unreadable, write-only language), but that a subset of Scala could avoid this, and they are asking for Copute's planned feature-set:
http://alarmingdevelopment.org/?p=562 (note I suggested HaXe on that thread, because Copute isn't ready yet)
Every 10 years we need a new programming language paradigm:
https://goldwetrust.forumotion.com/t112p120-computers#4141
Shelby wrote:
http://creativekarma.com/ee.php/weblog/about/
In 1975 I started using “structured programming” techniques in assembly language, and became a true believer.
In 1983 a new era dawned for me as I started doing some C programming on Unix and MS-DOS. For the next five years, I would be programming mixed C/assembly systems running on a variety of platforms including microcoded bit-slice graphics processors, PCs, 68K systems, and mainframes. For the five years after that, I programmed almost exclusively in C on Unix, MS-DOS, and Windows.
Another new era began in 1994 when I started doing object-oriented programming in C++ on Windows. I fell in love with OO, but C++ I wasn’t so sure about. Five years later I came across the Eiffel language, and my feelings for C++ quickly spiraled toward “contempt.”
The following year, 2000, I made the switch to Java and I’ve been working in Java ever since.
About now, it is time for the one that follows the Java (virtual machine, garbage collection, no pointers, everything is an object) paradigm.
Included in my point is that Android may be the killer-app, not OS:
https://goldwetrust.forumotion.com/t112p135-computers#4233
============
Is JavaScript the next mainstream programming language?
http://www.richardrodger.com/2011/04/05/the-javascript-disruption/
Well not server-side, and it lacks referential transparency ("immutability"):
http://blog.objectmentor.com/articles/2008/12/29/a-wish-list-for-the-next-mainstream-programming-language
Note that the most widespread language is Intel assembly code. It is possible that the next mainstream language could be one that compiles to JavaScript to run on clients (e.g. GWT compiles Java to JavaScript), since JavaScript itself lacks some critical features discussed below.
Here are pertinent articles on the next big mainstream language:
http://www.jroller.com/scolebourne/entry/the_next_big_jvm_language1
http://eugenkiss.com/blog/2010/the-next-big-language/ (Note Copute can do Rust's Typestate)
http://steve-yegge.blogspot.com/2007/02/next-big-language.html
http://lambda-the-ultimate.org/node/1277
===================
Current language rankings:
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
===================
http://itc.conversationsnetwork.org/shows/detail4764.html
While criticizing the verbosity and complexity of current high-level languages, Rob Pike gives his version of history:
@ 4:30min Rob Pike, co-creator of the Go language, wrote:
How did we get here?
1) C and UNIX became dominant in research.
2) The desire for higher-level languages led to C++, which grafted the Simula style of OOP onto C. It was a poor fit, but since it compiled to C, it brought high-level programming to UNIX.
3) C++ became the language of choice in parts of industry and in many universities.
4) Java arose as a cleaner, stripped down C++.
5) By the late 1990s, a teaching language was needed that seemed relevant, and Java was chosen.
Last edited by Shelby on Wed Jun 22, 2011 9:23 am; edited 12 times in total
"Why the future doesn't need us." - I disagree
Tangentially, note there is a more complete (adds images and important links) and easier-to-read version of the essay Understand Everything Fundamentally.
http://www.wired.com/wired/archive/8.04/joy.html
Bill Joy, cofounder and Chief Scientist of Sun Microsystems, was cochair of the presidential commission on the future of IT research, and is coauthor of The Java Language Specification.
Here he quotes the famous Ray Kurzweil, who quoted Theodore Kaczynski, the Unibomber:
First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them.
[...]
Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system.
Let us do something about that. I think humans will always be smarter than machines, the machines are just tools.
Bill Joy wrote:
the fact that the most compelling 21st-century technologies - robotics, genetic engineering, and nanotechnology - pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate
Tsk tsk. Bill, please, I think you are smarter than that. A computer is never smarter than the program it was given. A computer cannot program itself, and never will be able to, because it can't be made truly sentient. Creativity cannot be modeled, ever. It can be emulated, but only what has been created in the past can be emulated. The future creativity belongs to man...well actually...oh never mind, you wouldn't believe me anyway...
The collective creativity of mankind far outstrips the number of programs that could ever be written, for one reason: Russell's Paradox says there is no set that does not include itself. It doesn't matter how many millions of times faster you make the CPU; it is not the speed or memory capacity of an individual human brain that should be compared, but rather the fact that the computer hardware is only as smart as the humans that program it-- regardless of how fast the hardware CPU is. Faster hardware CPUs will make humans much smarter. For example, my external memory is Google.
Bill, you committed the typical Malthusian mistake. You are as fooled as the ignorant Peak Oilers and Global Warmists. Please come back to your roots and senses.
Math proof: computers can never exceed human creativity
...The collective creativity of mankind far outstrips the number of programs that could ever be written, because for one reason, Russell's Paradox says there is no set that does not include itself...
https://goldwetrust.forumotion.com/t112p120-computers#4183
Shelby wrote:
Russell's Paradox: there is no rule for a set that does not cause it to contain itself, thus all sets are infinitely recursive.
[...]
And there follow all the other theorems, which all derive from and relate to the 2nd law of thermodynamics, which says the universe is always trending to maximum disorder (i.e. maximum possibilities):
* Liskov Substitution Principle: it is an undecidable problem that subsets inherit.
* Linsky Referencing: it is undecidable what something is when it is described or perceived.
* Coase Theorem: there is no external reference point, any such barrier will fail.
* Godel's Theorem: any formal theory, in which all arithmetic truths can be proved, is inconsistent.
* 1856 Thermo Law: the entire universe (a closed system, i.e. everything) trends to maximum disorder.
The Emperor's New Mind, by Roger Penrose
http://en.wikipedia.org/wiki/Orch-OR
...Godel's Theorem showed that the brain had the ability to go beyond what could be achieved by axioms or formal systems. This would mean that the mind had some additional function that was not based on algorithms (systems or rules of calculation). A computer is driven solely by algorithms. Penrose asserted that the brain could perform functions that no computer could perform, known as "non-computable" functions.
Penrose went on to consider what it was in the human brain that might not be driven by algorithms. The physical law is described by algorithms, so it was not easy for Penrose to come up with physical properties or processes that are not described by them. He was forced to look to quantum theory for a plausible candidate.
[...]
The quantum waves are essentially waves of probability, the varying probability of finding a particle at some specific position
[...]
the choice of position for the particle is random.
The creativity of the mind is unbounded non-determinism, meaning that the disorder (i.e. # of possibilities) in the universe is always increasing. Thus, the mind can never be entirely described by any static algorithm. If they make a computer to model the synapse structure, then they will need to model the subatomic processes that occur within the brain, which are unbounded nondeterminism. It is impossible to put unbounded non-determinism in an algorithm-- life itself requires that life can not know itself, because every definable set includes itself (Russell's Paradox; ironically Russell was an atheist and couldn't see the implication of his paradox!).
This reminds me of an article I read today where scientists found an exploded star so dense (60 billion tons per teaspoon) that its neutrons have all aligned and so they are seeping out without any hole for them to seep out through.
Yet again, my Theory of Everything, which is that the universe wraps back onto itself in terms of entropy, has shown itself to explain everything.
(tangentially point: actually it is our perception/measurements which are always finding new possibilities, the universe may already have infinite possibilities)
The following widely accepted principle also supports the above conclusions:
http://en.wikipedia.org/wiki/Uncertainty_principle
Roger Penrose explains more in a video:
https://www.youtube.com/watch?v=yFbrnFzUc0U
A beginning of space-time doesn't exist
In the following video, he is getting closer to my theory, but he misses the point that we can't go back in time, because we would need infinite time to do so, because time in the past sees our clock as slower-- infinitely slower if we want to go back to antiquity:
https://www.youtube.com/watch?v=pEIj9zcLzp0
==================
ADD: to those idiot commentators at the Amazon.com book link to Emperor's New Mind above, yes, a computer can be subject to unbounded non-determinism, and it is known as a "bug". You fools entirely missed the point that the non-determinism is what the algorithms are not. An algorithm is static, dead, not unbounded-- which is precisely why every program will always have "bugs" forever. Even if the bug is that it can't interact with every external state.
http://www.amazon.com/review/R1B73KYRB2LYOP/ref=cm_cr_rev_detmd_pl?ie=UTF8&cdMsgNo=23&cdPage=3&asin=0192861980&store=books&cdSort=oldest&cdMsgID=Mx3AZ9JR4LN385E#Mx3AZ9JR4LN385E
Shelby wrote:
I see that you have entirely missed the point.
Godel's theorem is fundamental, not some straw-man abstraction based on an initial axiom. Let me rephrase it as "any formal theory, in which all arithmetic truths can be proved, is inconsistent". Essentially it is a restatement of Russell's Paradox, which I phrase "there is no rule for a set that does not cause it to contain itself, thus all sets are infinitely recursive". Try as you might until forever, you will never find one exception to Russell's Paradox.
Several other theorems say the same thing in different contexts:
* Liskov Substitution Principle: it is an undecidable problem that subsets inherit.
* Linsky Referencing: it is undecidable what something is when it is described or perceived.
* Coase Theorem: there is no external reference point, any such barrier will fail.
* 1856 Thermo Law: the entire universe (a closed system, i.e. everything) trends to maximum disorder (maximum possibilities).
A computer can also be subject to unbounded non-determinism (the phenomenon that Penrose explains gives rise to ever-changing human creativity), and it is known as a "bug". Unbounded non-determinism is what static algorithms are not. An algorithm is static, dead, not unbounded-- which is precisely why every program will always have "bugs" forever. Even if the bug is that the algorithm can't interact with every possible external state, and due to the 2nd law of thermodynamics, the possible states of the universe are always increasing.
Olly you are looking for an explanation of what consciousness is, but nothing can be alive and explain the rules for what it is. And that is fundamental. So fundamental that it actually proves that science can never measure creation. Go to my site goldwetrust.forumotion.com and read more in the Technology and Knowledge sections.
I won't be checking back here, contact me at my site if you want to discuss it further.
==================
Professor Chomsky,
In all due respect for your expertise in the field of linguistics, I am not surprised that in your interview on faith ( https://www.youtube.com/watch?v=ewP5tNLBb2E ), that you would quote Bertrand Russell to support an irrational denial of Russell's own Paradox:
https://goldwetrust.forumotion.com/t112p135-computers#4264
P.S. I admire your many logical statements in various interviews. I simply suppose you've been lacking a key insight. So that is why I emailed you. Hope you are not perturbed by my audacity.
Shelby Moore III
===============
ADD: Rat Brain Modelers Denounce IBM's Cat Brain Simulation as "Shameful and Unethical" Hoax
The Blue Brain project leader says that IBM's simulated brain does not even reach the level of an ant's brain
http://www.popsci.com/technology/article/2009-11/blue-brain-scientist-denounces-ibms-claim-cat-brain-simulation-shameful-and-unethical
IBM's claim of simulating a cat cortex generated quite a buzz last week, but now the head researcher from the Blue Brain project, a team that is working to simulate its own animal brain (a rat's), has gone incandescent with fury over what he calls the "mass deception of the public."
Henry Markram leads the Blue Brain project that successfully simulated a self-organizing slice of rat brain at the École Polytechnique Fédérale de Lausanne in Switzerland. He has issued a point-by-point denouncement of the cat claim that bubbles with outrage at IBM Almaden's Dharmendra Modha.
"There is no qualified neuroscientist on the planet that would agree that this is even close to a cat's brain," Markram writes in his e-mail to IBM. "I see he [Modha] did not stop making such stupid statements after they claimed they simulated a mouse's brain."
Markram calls the IBM simulation a "hoax and a PR stunt" that any parallel machine cluster could replicate. He adds that creating a billion interactive virtual neuron points represents no meaningful achievement as far as simulating intelligence, but merely reflects the brute supercomputing power at IBM's disposal.
"We could do the same simulation immediately, this very second by just loading up some network of points on such a machine, but it would just be a complete waste of time -- and again, I would consider it shameful and unethical to call it a cat simulation," Markram says. He suggested that IBM's simulation feat does not even reach the levels of ant intelligence.
The Blue Brain researcher concludes by expressing his shock at IBM and DARPA's support of the virtual feline brain, and says that he would have expected an ethics committee to "string Modha up by his toes." Yikes.
Still, Markram has a point. Creating any sort of artificial intelligence has long represented a difficult and arduous process, and so expecting a miracle breakthrough seems unlikely. Perhaps we should have paid more attention to the novel Good Omens, where Hell's agent Crowley owns "an unconnected fax machine with the intelligence of a computer and a computer with the intelligence of a retarded ant." To add some more perspective, that book was published back in 1990.
animemaster
11/23/09 at 4:43 pm
Why can't they simulate every cell down to the molecule and the fold of each protein? Now that would be a show of brute force worth wowing over.
Abandonfish
11/23/09 at 8:09 pm
Even comparing it to an Ant's brain is quite a leap, a point neuron simulation is definitely not capable of the behavioral intelligence displayed by insects.
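For readers wondering what the "point neuron" in Markram's and Abandonfish's remarks amounts to: each neuron is reduced to a single number with a threshold. Here is a minimal leaky integrate-and-fire sketch in TypeScript (all constants are arbitrary placeholders, not IBM's or Blue Brain's actual model):
- Code:
// Minimal sketch of a "network of points": leaky integrate-and-fire neurons.
const N = 1000;                        // number of point neurons
const v = new Float64Array(N);         // membrane potential of each neuron
const input = new Float64Array(N);     // summed synaptic input per step

function step(): void {
    for (let i = 0; i < N; ++i) {
        v[i] = 0.9 * v[i] + input[i]; // leak + integrate
        input[i] = 0;
        if (v[i] > 1.0) {             // threshold crossed: "spike"
            v[i] = 0;                 // reset
            // deliver the spike to a few random "synapses"
            for (let k = 0; k < 10; ++k) {
                input[Math.floor(Math.random() * N)] += 0.1;
            }
        }
    }
}
Scaling that array to a billion entries on a parallel cluster is exactly the brute-force feat Markram dismisses: more points, but no more intelligence.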
==================
http://www.caseyresearch.com/cdd/brain-vs-computer?active-tab=archives
As to processor speed, let’s assume a very conservative average firing rate for a neuron of 200 times per second. If the signal is passed to 12,500 synapses, then 22 billion neurons are capable of performing 55 petaflops (a petaflop = one quadrillion calculations) per second.
The world’s fastest supercomputer, a monster from Japan unveiled by Fujitsu at a conference this past June, has a configuration of 864 racks, comprising a total of 88,128 interconnected CPUs. It tested out at 8 petaflops (which only five months later was upped to 10.51 petaflops). Our brains are nearly five times faster.
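As a quick sanity check, here is the arithmetic behind those figures in TypeScript, using only the article's own numbers (a sketch of the article's estimate, not a claim about actual brain computation):
- Code:
// Sanity-checking the article's arithmetic with its own numbers.
const neurons = 22e9;        // cortical neurons
const firingRate = 200;      // firings per second (the article's conservative average)
const synapses = 12500;      // synapses each signal is passed to

const brainFlops = neurons * firingRate * synapses;
console.log(brainFlops);     // 5.5e16 = 55 petaflops

const kComputer = 10.51e15;  // Fujitsu's machine, 10.51 petaflops in flops
console.log(brainFlops / kComputer); // ~5.2x, the article's "nearly five times faster"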
But that’s not even half the story. Unlike transistors locked into place on their silicon wafers, synaptic connections can and do move over time, creating an ever-shifting environment where the possible hookups are, for all practical purposes, limitless. Furthermore, there are another 78 billion neurons, give or take, outside of the cortex, hard at work on other complex functions.
The wiring complexity of our brains alone means that, in the crude terms in which we understand computers today, our brains are much more complex than anything we've built, and still faster than even the most expensive supercomputer.
On top of that, we are only beginning to understand the complexity of that wiring. Instead of one-to-one connections, some theorists postulate that there are potentially thousands of different types of inter-neuronal connections, upping the ante. Moreover, recent evidence points to the idea that there is actually subcellular computing going on within neurons, moving our brains from the paradigm of a single computer to something more like a self-contained Internet, with billions of simpler nodes all working together in a massive parallel network. All of this may mean that the types of computing we are capable of are only just being dreamt of by computer scientists.
Will our electronic creations ever exceed our innate capabilities? Almost certainly. Futurist Ray Kurzweil predicts that there will be cheap computers with the same capabilities as the brain by 2023. To us, that seems incredibly unlikely.
Last edited by Shelby on Thu Feb 09, 2012 12:37 am; edited 21 times in total