Computers:
rebuttal to "Internet Kill Switch"
Thanks for posting that!
Yes we have a battle coming between the State and the individual.
I urge people to come up to speed on the centralization that is dying:
https://goldwetrust.forumotion.com/knowledge-f9/book-ultimate-truth-chapter-6-math-proves-go-forth-multiply-t159-15.htm#3748
https://goldwetrust.forumotion.com/economics-f4/changing-world-order-t32-105.htm#3788
goldwave wrote:From Paul Rosenberg, the CEO of Cryptohippie USA, the leading provider of Internet anonymity.
http://cryptohippie.com/
http://www.lewrockwell.com/orig11/rosenberg-p1.1.1.html
- It is impractical for them to enforce the proposed wiretap-backdoor legislation; people will simply move to P2P and rogue anonymous software. It will be effective against the large, popular sites, though.
- "Internet kill switch" is technically impractical, because TCP/IP is self-healing and will route around any networks that are taken down. They can kill major arteries, but the internet will go virally P2P in a very short time.
- Technically, SecureBGP (BGPSEC) can't be widely implemented, because it won't scale well beyond the major arteries. Ad hoc routing with TCP/IP will route around it if it becomes a blockage (nature sees it as non-functional and routes around it, per Coase's Theorem). The fact that BGP is P2P now means it will be impossible to go back to making it centralized.
- Regarding intellectual property policing, a decentralized DNS is feasible and will be incentivized by the govt's fascism.
- All of this is like the Napster experience: the more the authorities attacked, the more P2P alternatives popped up and the more people participated in downloading music for free. The govt is powerless (as usual), but it will hold sway over the large sites and arteries.
- The "computer health certificate" is so impossible that I really doubt the competence of the author of the link you provided.
- Cloud computing can be P2P, I am working on it: https://goldwetrust.forumotion.com/knowledge-f9/book-ultimate-truth-chapter-6-math-proves-go-forth-multiply-t159-15.htm#3640
Re: Computers:
I own the domain "Copute.com".
Cooperative computing: software that cooperates like Legos, so that anyone can build anything they want to accomplish in software.
I visualize a fundamental, radical acceleration of the way software can progress, by being pure functional and thus re-usable at a finer granularity without refactoring tsuris:
http://www.chrismartenson.com/blog/prediction-things-will-unravel-faster-than-you-think/45297?page=7#comment-91068
http://copute.com/dev/docs/Copute/ref/function.html#Purity
https://goldwetrust.forumotion.com/t159p15-book-ultimate-truth-chapter-6-math-proves-go-forth-multiply#3640
I realized that what Linus Torvalds (the genius creator of Linux and a primary factor in the open-source phenomenon) said about "address space" separation is fundamentally a call for pure functional programming of the entire system, including the end-user software:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=66630&threadid=66595&roomid=2
Some background on my prior thoughts on that:
http://jasonhommelforum.com/forums/showthread.php?p=55539#post55539
http://jasonhommelforum.com/forums/showthread.php?p=55512#post55512
What has gelled in my mind now is that there are two related challenges:
1. Resource access control must be as fine-grained as the re-usable units of software. Otherwise, the re-use is not bijective and the lossy interoperation will cascade (domino).
2. Re-use must be pure functional, as any state machine at the highest level will be re-usable only to the extent that the state machine is provable and can be factored into the re-use. (A sketch follows below.)
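To make challenge 2 concrete, here is a minimal sketch in Python (Python only because it is the language discussed later in this thread; Copute itself is not yet runnable), contrasting a pure function with a stateful one. The function names and numbers are hypothetical illustrations:
- Code:
# Pure: the result depends only on the inputs, so the function can
# be reused, cached, or re-run anywhere with no hidden dependencies.
def with_tax(price, rate):
    return price * (1 + rate)

# Stateful: the result depends on hidden external context (the
# mutable global 'rate'), so a caller cannot know what it does from
# its interface alone -- the lossy interoperation described above.
rate = 0.12
def with_tax_stateful(price):
    return price * (1 + rate)

print(with_tax(100, 0.12))    # 112.0, always, anywhere
rate = 0.20
print(with_tax_stateful(100)) # 120.0 now; the meaning changed silently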
Now to turn this into working code with a market...
=======================
Let me explain a key conclusion of what Linus said:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=66630&threadid=66595&roomid=2
Linus Torvalds wrote:Anybody who has ever done distributed programming should
know by now that when one node goes down, often the rest
comes down too. It's not always true (but neither is it
always true that a crash in a kernel driver would bring
the whole system down for a monolithic kernel), but it's
true enough if there is any kind of mutual dependencies,
and coherency issues.
And in an operating system, there are damn few things that
don't have coherency issues. If there weren't any coherency
issues, it wouldn't be in the kernel in the first place!
(In contrast, if you do distributed physics calculations,
and one node goes down, you can usually just re-assign
another node to do the same calculation over again from
the beginning. That is not true if you have a
really distributed system and you didn't even know where
the data was coming from or where it was going).
What he is saying is that most programming languages (other than the esoteric Haskell, Erlang, etc.) create software that is not pure functional (referentially transparent), and thus the coordination of the interoperation of these programs is lossy (incoherent; you don't know what the referential state-machine dependencies are). This is why he says you can do no better than to put this operating-system coordination (e.g. in Windows, Mac OS X, Linux, etc.) into a giant spaghetti monolithic kernel. However, the real problem is the lack of the two items I enumerated above.
We need to change the way we write and design software, to make pure functional lego building blocks, with matching granularity of resource access control (permissions). This inherently addresses the security issues too, such as DDoS:
http://jasonhommelforum.com/forums/showthread.php?p=55539#post55539
And the ability to make websites secure on the client:
http://www.marketoracle.co.uk/Article22098.html
==================
More from Linus on coherency challenges:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=66656&threadid=66595&roomid=2
Linus Torvalds wrote:>To be fair even in monolithic kernels it is not easy
>to share data given races on SMP/preemption etc.
Nobody claims that threaded programming is easy.
But it is a hell of a lot easier if you can use a lock
and access shared data than if you have to use some
insane distributed algorithm. It's usually many orders
of magnitude easier in monolithic kernels.
Synchronizing some data in a monolithic kernel may
involve using a lock, or special instructions that do
atomic read-modify-write accesses. Doing the same in
a microkernel tends to involve having to set up a whole
protocol for communication between the entities that
need to access that data, or complex replication schemes
(or they just end up mapping it into every process space,
just to avoid the problem. Problem solved, by just
admitting that separate address spaces was a mistake)
>Usually you need to design a locking protocol, care
>about livetime issues etc. It's all not simple there
>neither.
Nobody says kernels are easy. We're talking purely about
the relative costs. And microkernels are harder.
Much harder.
>I always admired how elegant some of parallel Erlang
>programs look and they use message passing
>so you certainly can do some stuff cleanly with
>messages too.
Not efficiently, and not anything complex.
It's easy and clean to use messages if you don't have
any truly shared data modified by both entities.
But the whole point of a kernel tends to be about shared
data and resources. Memory pressure? How do you free
memory when you don't know what people are using it for?
You can try to do an OS in Erlang. Be my guest. I'll be
waiting (and waiting.. The point being - you can't do
a good job).
Let's try an analogy. I'm not sure it's a great analogy,
but whatever:
In the UNIX world, we're very used to the notion of having
many small programs that do one thing, and do it well. And
then connecting those programs with pipes, and solving
often quite complicated problems with simple and independent
building blocks. And this is considered good programming.
That's the microkernel approach. It's undeniably a really
good approach, and it makes it easy to do some complex
things using a few basic building blocks. I'm not arguing
against it at all.
BUT IT IS NOT REALISTIC FOR ALL PROBLEMS. It's a really
really good way to solve certain problems. I use
pipelines all the time myself, and I'm a huge believer.
It works very well indeed, but it doesn't work very well
for everything.
So while UNIX people use pipelines for a lot of important
things, and will absolutely swear by the "many small
and independent tools", there are also situations where
pipelines will not be used.
You wouldn't do a database using a set of pipes, would you?
It's not very efficient, and it's no longer a simple flow
of information. You push structured data around, and you
very much will want to access the database directly (with a
very advanced caching mapping system) because not doing so
would be deadly.
Or, to take another example: you may well use a typesetting
system like LaTeX in a "piped" environment, where one tool
effectively feeds its input to another tool (usually through
a file, but hey, the concept is the same). But it's not
necessarily the model you want for a word processor, where
the communication is much more back-and-forth between the
user and the pieces.
So the "many small tools" model works wonderfully well,
BUT IT ONLY WORKS FOR A NICELY BEHAVED SUBSET OF YOUR
PROBLEM SPACE. When it works, it's absolutely the right
way to do things, since you can re-use components. But
when it doesn't work, it is just a very inconvenient
model, and while it's certainly always possible to
do anything in that model (set up bi-directional sockets
between many different parts), you'd have to be crazy to
do it.
And that's a microkernel. The model works very well for
some things. And then it totally breaks down for others.
Linus is correct that your I/O and data structures (i.e. your state machine) cannot be distributed. Monolithic kernels attempt to filter the aliasing error of the lossy coherency.
What I am saying is that we need a computer language (my proposed Copute) that enables us to delineate between the pure functional and the stateful portions of our software, so that we can re-use the former in other software and only have to focus our coherency-error challenges on the known stateful portions.
For example, pure functional code can be interrupted at any time without causing any coherency (race) issues. As I said, the stateful portions should be high-level (the outermost functions of our software).
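To illustrate that claim with a toy Python sketch (an analogy, not Copute): the pure function below can run on many threads at once, or be interrupted and re-run, without any race, while the only mutation is confined to a single outermost step:
- Code:
from concurrent.futures import ThreadPoolExecutor

# Pure: no shared mutable state, so parallel or interrupted
# execution cannot produce a coherency (race) error.
def word_count(text):
    return len(text.split())

texts = ["free the net", "route around damage", "p2p"]
with ThreadPoolExecutor() as pool:
    counts = list(pool.map(word_count, texts))  # safe in parallel

# Stateful: the one mutation happens at the outermost level, where
# the coherency challenge is known and contained.
total = sum(counts)
print(counts, total)  # [3, 3, 1] 7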
==================
Linus on security:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=67181&threadid=66595&roomid=2
Linus Torvalds wrote:For the last year or so, the new magic world has been
"security". It appears as totally bogus as the "ease of
maintenance" tripe was (and for largely the same reasons:
security bugs are often about the interactions
between two subsystems, and the harder it is to write
and maintain, the harder it is to secure).
I'm sure that ten years from now, it will be something else.
There's always an excuse.
Linus is again correct. Security is inherent in the pure functional software, but the security holes are in the stateful portions (where we have the coherency, aka interoperation, challenge). That is why we need a language to model the separation of the two software paradigms in the same system.
Linus Torvalds wrote:The whole "make small independent modules" thing just sounds
like manna from heaven when you're faced with creating an
OS, and you realize how daunting a task that is. At
that point, you can either sit back and enjoy the ride
(which I did - partly because I didn't initially really
realize how daunting it would be), or you can seek mental
solace in an idea that makes it sound easier than it is.
This comes for free in the pure functional portions, as the language enforces that the software is coherency agnostic. I am trying to find the post where Linus admits such a possibility for the future. Ah, here is one post where he admits that the coherency issues could be attacked from the language layer:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=67198&threadid=66595&roomid=2
And here he says it again more generally:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=67213&threadid=66595&roomid=2
Linus Torvalds wrote:As I already alluded to (and expanded on in my reply to
myself), I actually think using "designer languages" may
be a much better solution to the modularity problem than
the microkernel approach. The C model is very much one
where you have to do all the modularity by hand (which
Linux does, btw - don't get me wrong).
Economic model for Copute ($50 x 7 billion = $350 billion per year)
http://esr.ibiblio.org/?p=2813&cpage=5#comment-290515
Shelby wrote:>>This really resolves to who makes money in the ecosystem, the app vendors (iOS) or the handset makers (Android).
>
> You came in late, so maybe you haven’t seen my analysis of Google’s grand strategy. You should probably read this and this for starters.
Check my logic please?
Some users are unwilling to pay $50+ for a new OS every time they buy a new computer, unless it is hidden in the cost of the new computer, because they are not getting any significant innovation. Witness the number of users preferring to run Windows XP SP2, especially in the developing world, as it is much easier to steal than Vista or Windows 7 -- just patch it and turn off Windows Update.
Closed-source code strangles itself because it fights against innovation of itself. The vested interest is in the past, not in the optimum future.
>> You don’t think in the future that people will have similar amounts of money (relative to the cost of the unit) tied up into their phones as they do for their computers?
>
> No, because I have yet to buy an app. So far, everything I’ve been able to identify that I wanted has been available for free.
Let me peer into a possible future.
Selling many little innovations separately doesn't scale, i.e. users can't be bothered with the hassle of micro-payments, e.g. imagine making a $0.0001 fractional-cents payment decision on every URL clicked; this implies Apple's AppStore is capitalizing on but a fraction of the potential innovation. And large open source projects don't scale revenue sharing out to multitudes of random contributions (thus are not maximizing potential innovation). Imagine instead $12 a year (total, for all the apps they use) from each of 100s of millions of users of a computer or smart phone, with that revenue distributed proportionally to every contributor, given some unifying (open source!) paradigm for monetizing that myriad of innovations. Imagine that increasing to $50+ per year, as competition between applications forced unification into that paradigm, thus increasing the value to the user. The userbase and the value (cost) per user would both be increasing. Imagine that paradigm won because of a (key, fundamental to computing theory) economic benefit of a "designer" programming language that rendered existing languages and operating systems (including Linux) uncompetitive, precisely because it enabled fine-grained contribution scaling.
And if you think all 7 billion won't have a computer soon, read this:
http://esr.ibiblio.org/?p=2835
http://tech.fortune.cnn.com/2010/12/22/2011-will-be-the-year-android-explodes/
=========================================
Economic Model For Sharing Revenue With Contributors
=========================================
Copute will profile a statistically accurate sample of the actual CPU time consumed by each code contribution. The gross revenue can be shared in proportion to CPU time. Additionally, this information will drive software developers (who build programs from these code bases) to select the code that has the best performance (least CPU time usage), which will drive competition among contributors. Since these will be pure functional code contributions (per Copute's unique paradigm-shift breakthrough), a contributor can improve upon an existing code contribution (and congruency is verifiable due to the pure functional referential transparency); if, for example, they reduce the CPU time by say 90%, then they receive 90% of the revenue generated by their contribution, and the prior code contributor will receive 10%. The competitor will be able to market their contribution to software developers by attaching theirs to the former contribution, and software developers can choose (we might even be able to automate unit testing to verify congruency of outputs between the two contributions).
The bottom line is the user will not see any of this, they will just see an exponential increase in the rate of software innovation.
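Here is a toy Python sketch of that proportional split (all numbers and contributor names are hypothetical; the profiler and payment plumbing are not yet built):
- Code:
# Hypothetical profiled CPU seconds consumed per code contribution.
cpu_time = {"alice_sort": 70.0, "bob_parse": 30.0}
gross_revenue = 1000.0

# Gross revenue shared in proportion to CPU time consumed.
total = sum(cpu_time.values())
share = {name: gross_revenue * t / total for name, t in cpu_time.items()}
print(share)  # {'alice_sort': 700.0, 'bob_parse': 300.0}

# If carol ships a congruent replacement for alice_sort that cuts
# its CPU time by 90%, carol receives 90% of that contribution's
# revenue stream and alice retains 10%, per the rule above.
improvement = 0.90
carol = share["alice_sort"] * improvement
alice = share["alice_sort"] - carol
print(carol, alice)  # 630.0 70.0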
Face recognition software and Facebook
Put your photo online, and the computer can identify you in any video or other photo. Facebook is able to automate the labeling of names in photos now.
And here is what the police will do with that technology:
http://www.marketoracle.co.uk/Article25179.html
Copute Milestone: fixed 6 known ambiguities in other major languages
Non-context-free grammars cause human errors in programming. This is fundamental, and it has been causing me (and other programmers) to repeat silly bugs for 25 years (Murphy's Law applies even when one is experienced):
http://copute.com/dev/docs/Copute/ref/llk.html#Context_Free_Grammar
Copute removes these known context-free ambiguities:
1. Dangling else. (Note Python is not entirely context-free.)
2. Ambiguity between 'return' and a following expression.
3. Ambiguity between prefix and postfix unary ++ and --.
4. Ambiguity between infix and unary - and +.
The LL(k) grammar compiler tells me about all such ambiguities, but it is up to me to design the above fixes.
Copute also removes these known terminal semantic ambiguities, which are caused by giving terminals (e.g. '(' and '+') different semantic meaning in different contexts where two of the contexts can occur simultaneously:
5. Ambiguity between grouping and function call.
6. Ambiguity between number add and string concatenation operator.
If you know of any more terminal semantic ambiguities in other major languages, please let me know as soon as possible before I finalize the Copute grammar.
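As a concrete illustration of ambiguity 6: in Python, the terminal '+' means numeric addition or string concatenation depending on the runtime types, so the same expression carries two different meanings (this sketch just demonstrates the ambiguity; the Copute reference linked above describes its actual resolution):
- Code:
def add(a, b):
    return a + b  # which '+' this is depends on the runtime types

print(add(1, 2))      # 3  (numeric addition)
print(add('1', '2'))  # 12 (string concatenation)
# A grammar without this terminal semantic ambiguity gives
# concatenation its own operator, so the meaning of '+' is fixed
# by the grammar rather than by the runtime context.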
Python's conflation of indenting and grammar
Now on to Python, a computer language created in the 1990s that is now gaining momentum and popularity...
Shelby at jasonhommelforum.com wrote:...People rave about Python's conflation of indenting and grammar. Geez that is a step backwards to 30 years ago. Copute does it mathematically correct (all conflation removed from the grammar -> semantic translation layer) as per the prior post...
Haskell also conflates layout and grammar.
Here is how Python solved the nested if-else ambiguity, by conflating indenting and grammar, by making indenting a syntactical unit:
- Code:
if foo:
    if bar:
        print 'Inner True'
    else:
        print 'Inner False'
else:
    print 'Outer False'
One problem with that is it doesn't make the following unambiguous, therefore the following is illegal in Python:
- Code:
if test1: if test2: print x
Copute can allow the above, as follows. Even though Python would not be able to allow the above when the "else:" is present, it should be able to allow it as above when "else:" is not present, but it does neither. Note that since Python uses indenting to declare a statement group (aka "block"), i.e. makes newline+indent the block start and newline+outdent the block end (whereas Copute uses braces to delimit blocks), it cannot do Copute's context-free solution:
- Code:
if test1 if test2 print x
if test1 {if test2 print x} else print y
Copute is not conflated with indenting, so the following works and has equivalent meaning in Copute also:
- Code:
if test1
    if test2 print x
if test1 {
    if test2 print x
} else
    print y
Thus the programmer's choice of visual layout is not constrained by grammar (layout is not conflated with grammar), which enables a code renderer to re-layout (aka reflow) code automatically, which for example would be necessary to make very wide lines wrap on a small smart-phone screen. Note I envision a coming smart-phone that will have a second foldout screen that is juxtaposed against the main screen making your iPhone screen twice as wide in the narrowest direction. So the width of text displayed will be different depending if you are displaying on your desktop wide screen, your iPhone narrow, your iPhone widened with foldout, or your iPad.
However, I think Python can be automatically reflowed too, because indenting is required to create a new block; thus lines that are too long can be wrapped to the next line at the same indent level without changing the meaning of the code. And single-line if-else constructs can automatically be wrapped to new lines with indenting. These automatic mappings can also be done in the inverse, to accommodate wider screens.
Thus, I am leaning towards adopting Python's conflation of layout and grammar, since it appears to be invertible and bijective, and it adds a slight readability and slightly lower verbosity advantage (see below for examples). But if I do, I would still require braces for the single-line if-else case (see why above), and allow the option of braces for the single-line 'if' and 'while' (there is no 'for' in Copute) cases (see why below). And note that adopting this might change how I have decided to resolve the 6 ambiguities in other languages that I wrote about in the prior post.
Tangentially, notice that Python requires colons after statements and semicolons between expressions, which Copute does not:
- Code:
if x < y < z: print x; print y; print z
The advantage for Copute is you don't have to remember to insert those. In Python those semicolons have a higher grouping precedence than the colon, thus the above is equivalent to the following in Copute:
- Code:
if x < y < z {print x print y print z}
I don't see any advantage in verbosity for Python, and it is a heck of a lot more clear in Copute what the grouping is.
And Python's method is extremely subject to human error, because just one of those tiny semicolons missing (accidentally, hey I can barely see those tiny things), screws up the entire line:
- Code:
if x < y < z: print x; print y print z
This is equivalent to the following in Copute.
- Code:
if x < y < z {print x print y} print z
The following ambiguity exists in both Python and Copute:
- Code:
if x < y < z: print x print y print z # Python
- Code:
if x < y < z print x print y print z // Copute
In both cases, it is equivalent to the following in Copute:
- Code:
if x < y < z {print x} print y print z
Here is the justification from Guido, the creator of Python:
Any individual creation has its ideosyncracies [sic: idiosyncrasies], and occasionally its creator has to justify these. Perhaps Python's most controversial feature is its use of indentation for statement grouping, which derives directly from ABC. It is one of the language's features that is dearest to my heart. It makes Python code more readable in two ways. First, the use of indentation reduces visual clutter and makes programs shorter, thus reducing the attention span needed to take in a basic unit of code.
As you can see in the examples above, Python is not significantly shorter than Copute.
Second, it allows the programmer less freedom in formatting, thereby enabling a more uniform style, which makes it easier to read someone else's code. (Compare, for instance, the three or four different conventions for the placement of braces in C, each with strong proponents.)
As I wrote above, I am thinking that Python's use of the newline and indent as syntactical units, can be automatically reflowed, just as using braces can.
Whereas, if the code could not be reflowed for layout into different screen widths and lengths, then that loss of freedom propagates into domino tsuris. For example, say my iPhone screen is too narrow to accommodate the length of the lines in some Python code; then the lines cannot be wrapped to fit inside my narrow screen, because this would change the intended meaning (semantics) of the code. I will be forced to scroll both horizontally and vertically, which is extremely difficult for a human (try it: read a paper book through a piece of cardboard with a hole cut in the shape of your iPhone screen).
And the claimed verbosity and lack-of-homogeneity disadvantage of Copute's bracing (for blocks) is very minor, if not insignificantly pedantic.
Other than the dangling if-else case, bracing is not needed in Copute when there isn't more than one line in the block, e.g.
- Code:
while x < y
    print x
print end
And thus only 2 extra characters (the { and }) when there are 2+ lines in a block, e.g.
- Code:
while x < y {
    print y
    print x
}
print end
The various optional layouts for bracing in Copute include the following examples:
- Code:
while x < y {
    print y
    print x
}
while x < y
{
    print y
    print x
}
while x < y
    {
    print y
    print x
    }
if x < y {
    print y
    print x
} else print
Whereas in Python always:
- Code:
while x < y:
    print y
    print x
if x < y:
    print y
    print x
else: print
But in theory, Copute code can always be automatically reflowed to the preferred style of the reader, so thus compare the above Python to this in Copute:
- Code:
while x < y {
    print y
    print x
}
if x < y {
    print y
    print x
} else print
Or if you prefer always:
- Code:
while x < y {
    print y
    print x }
if x < y {
    print y
    print x
} else print
That slight readability advantage for Python becomes more and more insignificant as the number of lines in a block increases, and, if it comes at the cost of having layouts which cannot be reflowed, it is a loss of freedom both for the programmer who prefers a different layout style and for re-rendering to differing window sizes.
This emphasis on readability is no accident. As an object-oriented language, Python aims to encourage the creation of reusable code.
What a joke, but the sad part is that Guido did not (at least at that time) even realize that object-oriented programming with dynamic typing can never be referentially transparent (aka context-free) and thus is not reusable:
http://esr.ibiblio.org/?p=2491#comment-276772 ("Jocelyn" is me, read all my comments on page)
http://copute.com/dev/docs/Copute/ref/class.html#Virtual_Method
http://copute.com/dev/docs/Copute/ref/function.html#Purity
Python is always a dynamically typed language, so it is hopeless for wide-scale (meaning positive scaling law, listen at 4:20 and 15:00) reusability (aka compositions, mashups).
Even if we all wrote perfect documentation all of the time, code can hardly be considered reusable if it's not readable.
Guido did not understand what drives reusability. It is all about eliminating external context dependencies (aka futures contracts).
The whole point is that functions should be context-free (i.e. only depend on their inputs), and made as small as possible so they can be reused by other functions.
So then the whole point of (e.g. Copute's) reusability is that we don't want to force others to read the code inside a function just to know what it is doing. The statically typed (even if inferred) function interfaces (for functions that do not access external context, nor internally store context with closures, generators or the 'static' keyword) are self-documenting (even inferred types can be displayed by the compiler), and thus don't force everyone to load all the code (inside the functions) of the entire world in their head. Domino gridlock is to be avoided, not promoted.
Guido's misunderstanding is promulgated by many in the open source movement, who latched on to the rallying cry, "more eyes mean shallower bugs". See my first link above, where I (alias "Jocelyn") caught Eric Raymond (the self-proclaimed 160-IQ spokesman of the open source movement, who coined the above phrase) in a logic error on this. Btw, I was forced to use "Jocelyn", because Eric Raymond banned "Shelby" (and lately he banned "Jocelyn"). I will get the last word in the marketplace!
More eyes on the referentially transparent function interfaces is what we need, not huge monolithic open source code bases that can't be easily forked or understood by any contributors other than the core insiders.
I will change the world. That is why I gave my Christmas to this Copute. (There was a beautiful lady whom I let down this holiday season!)
Many of Python's features, in addition to its use of indentation, conspire to make Python code highly readable. This reflects the philosophy of ABC, which was intended to teach programming in its purest form, and therefore placed a high value on clarity.
Again what an ironic joke, that Guido didn't realize that Python can never be pure nor clear, because it can't be referentially transparent.
In this missive, he is extrapolating that making the bark clear allows one to see over the forest.
Readability is often enhanced by reducing unnecessary variability. When possible, there's a single, obvious way to code a particular construct. This reduces the number of choices facing the programmer who is writing the code, and increases the chance that will appear familiar to a second programmer reading it.
Again he failed to understand that context-freedom is necessary to prevent aliasing. For example, if conflating layout (indenting) and grammar is not invertible and bijective, it may cause the aliasing error of not being able to view code in narrow screens (although afaics so far, Python's conflation of layout and grammar is invertible and bijective).
Based on recent essays from Guido, I tend to think he may still not understand.
Look, I don't like it when people call me out, and I don't like to call others out, except when I try to help people and they ban me. I originally did not want to create Copute. I was making suggestions on how to improve HaXe (which is closest so far to what I want), and the creator of HaXe, Nicolas Cannasse, banned me.
I am not trying to embarrass Guido, but I want to make it clear that I have real reasons for being forced to create my own language. I wanted somebody else to do it, but after 25 years, I got tired of waiting and pleading and being banned.
Pros and Cons of Python's Indenting
I will discuss these in order of least important and least controversial first.
1. NEUTRAL: Python contains no do-while; it only has 'while'[ cited ]. This appears to be because the Python developers did not want to follow the 'do' (or the indented block that follows the 'do') with a 'while' that ends with no colon (a colon requires a following expression or block), for ideological consistency in Python's look-and-feel[ cited ]. They also considered a form with 'while:' (including the colon) that would have an optional following expression or block, but nowhere else in Python is a colon followed by "nothing"[ cited ] (i.e. "nothing" would, I guess, be a new line at the same indent or outdented). Afaics, it would be possible to design Copute's grammar such that indentation changes within expressions are consumed but ignored. And thus I don't see why the outdent to the 'while' couldn't be consumed within a compound do-while statement. The issue for Python was not context-free-grammar related, but rather ideological consistency. I think it would also be possible to have Copute implement that proposed optional block following 'while'[ cited ], which is known as the "loop and a half" pattern[ cited ], but in Copute it would introduce the same ambiguity as a dangling expression on 'return' (so a semicolon is always required). In any case, do-while (even the complete "loop and a half" form) is not much less verbose than an equivalent construct using 'while-true' and 'if-break'[ cited ], so it is difficult to make a strong case for augmenting the traditional C-like 'do-while'.
2. NEUTRAL: Deeply nested (i.e. indented) code blocks cannot be outdented, thus forcing horizontal scrolling[ cited ]. But outdenting such a case forces horizontal scrolling too. Besides, the deeply nested cases are usually more elegantly handled (and potentially more reusable too) with sub-functions, and potentially recursion or standard list operations such as 'map', 'fold', etc.
3. CON: Python code could be rendered unusable when posted on a forum or web page that removes whitespace[ cited ]. HTML collapses juxtaposed whitespace into a single space by default. Although this can be classified as user error (i.e. not enclosing the code within <pre></pre> tags), languages that do not conflate indenting and grammar wouldn't be subject to this user error (Murphy's Law).
4. PRO: It is impossible to obfuscate the meaning of a program by using bogus indentation[ cited ]. Note this must be combined with a line-continuation token (e.g. '...' in Python), otherwise this would be simultaneously a PRO and a CON[ cited ]. Obfuscation means that the indenting can imply the program has a different meaning than what the program's grammar sees. I gave several examples of that within my prior post about correcting the 6 ambiguities in other languages. Note that some people see obfuscation as a feature, i.e. to distribute JavaScript code that is so unreadable (i.e. the entire program squashed into one line, all unnecessary spaces removed, etc.) that it is not practical to steal the code. However, other forms of obfuscation, e.g. replacing identifiers with randomized words, are not prevented. And whitespace obfuscation is invertible and bijective to any preferred rendering of the grammar, so in theory someone could write an automated code-restoration tool; thus the value of whitespace obfuscation is dubious.
5. CON: Tabs and spaces cannot be safely mixed[ cited ], because the interpretation of the width of the tab character is not carried with all possible file types that can contain tabs. Thus if tabs and spaces are simultaneously allowed for indenting in the same file (not allowed in Python 3), then indenting alignments (and thus the meaning of the program) can be lost as code is copied around or opened in different tools. Languages which don't conflate indenting and grammar have a similar but less serious problem with loss of indenting alignment when mixing tabs and spaces (less serious because only the reader's meaning changes, not the actual execution meaning), which is why they don't get the #4 PRO above. Since Python 3 prevents mixing tabs and spaces for indenting, and since other languages also suffer from such mixing (thus they should also prevent it), I would like to rate this as a NEUTRAL, but there is the problem that other tools can silently introduce tabs, and the user is faced with tsuris without even knowing why or how to fix it[ cited ]. Also, I want to make a related but slightly orthogonal point: any use of tabs for indenting is bad, because indenting alignments can change when tab widths do. Thus I would advocate that tabs never be allowed for indenting, and this is even more critical for a language (e.g. Python) which conflates indentation and grammar and thus relies on indentation alignment for execution semantics.
6. CON: An accidental superfluous space can silently change the meaning of the code that follows[ cited ], e.g.
- Code:
if x:
.....print x
....print x
....if y:
........print y
Can you see that the first "print x" is indented one space more than the rest of the lines that follow? But will you notice it when it is buried in a page of 1000 lines of dense code? So the meaning of that code is actually as follows:
- Code:
if x:
    print x
print x
if y:
    print y
Consider this example:
- Code:
def myfunction(foo, bar):
....foo.boing()
...for i in bar.fizzle(foo):
......baz = i**2
....foo.wibble(baz)
....return foo, baz
It actually means:
- Code:
def myfunction(foo, bar):
    foo.boing()
for i in bar.fizzle(foo):
    baz = i**2
    foo.wibble(baz)
    return foo, baz
Conclusion and Decision
Even though I had intuitive theoretical misgivings about conflating whitespace and grammar before I wrote this post, I mildly appreciated Python's goal of consistent indenting of blocks without the pollution of braces. So I opened my mind to consider the pros and cons.
Upon evaluating the pros and cons, the human-error (Murphy's Law) cons introduced are overwhelming, and even the one PRO case requires polluting the page with line continuation tokens (e.g. '\'). However, what really sealed my decision not to emulate Python's conflation is that I realized just now that if Copute's grammar is well-defined with braces, then the user's editor can hide the braces when the indentation agrees, and/or the editor can enforce the indentation, not show the braces, and save the file with the appropriate braces. Thus all the cons of Python are avoided, the one PRO is also achieved, and all the stated benefits of Guido's goal are achieved entirely. A toy sketch of that editor-side transform follows.
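To make that concrete, here is a toy sketch of the save-time transform (add_braces is my own hypothetical helper, assuming 4-space indents and colon-terminated block openers):
- Code:
# Toy sketch: infer braces from indentation on save, so the stored file is
# brace-delimited while the programmer only ever sees consistent indenting.
def add_braces(lines):
    out, stack = [], [0]
    for line in lines:
        stripped = line.lstrip()
        if not stripped:
            continue                          # skip blank lines
        indent = len(line) - len(stripped)
        while indent < stack[-1]:             # dedent: close open blocks
            stack.pop()
            out.append(" " * stack[-1] + "}")
        out.append(line)
        if stripped.endswith(":"):            # block opener in this toy grammar
            out.append(" " * indent + "{")
            stack.append(indent + 4)          # assume 4-space indents
    while len(stack) > 1:                     # close blocks still open at EOF
        stack.pop()
        out.append(" " * stack[-1] + "}")
    return out

for line in add_braces(["if x:", "    print x", "print x"]):
    print(line)    # emits: if x: / { /     print x / } / print x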
So looks like I improved upon Guido's vision. :wink:
I seem to be serially good at that sort of "math visualization" success; perhaps that is why the IQ tests score me the way they do. I mean, I would just like a little bit of mutual respect from my peers (as in not banning people who disagree with you, because they might be correct and you can't see it), not bragging (well, maybe I am a little, but Copute is still vaporware).
=======
ADD: looks like Guido had a similar realization 13 years ago, but didn't act on it:
Benefits of referential transparency virally spreading into rest of design of Copute
1. Parametrized Class Inheritance
Here is a problem that Java is struggling with, which even the experts can't seem to solve succinctly and Wikipedia can't even describe coherently, but lookie here at Copute:
http://copute.com/dev/docs/Copute/ref/class.html#Parametrized_Inheritance
Covariant assignment, e.g. T<Super> = T<Sub>, is allowed when the type T is read-only [...] Contravariant assignment, e.g. T<Sub> = T<Super>, is also allowed when T is write-only.
So that means for a pure function in Copute, there will be absolutely no tsuris with parametrized classes that employ one level deep of inheritance. And no covariant inheritance tsuris with parametrized classes ever (which is the main type of inheritance that programmers expect intuitively). The sketch below illustrates the quoted variance rule.
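The quoted rule can be made concrete with Python's typing module, which enforces the same discipline (Source, Sink, Animal, Cat and feed are hypothetical illustration names):
- Code:
# Read-only (producer) positions admit covariance; write-only (consumer)
# positions admit contravariance.
from typing import TypeVar, Generic

T_co = TypeVar("T_co", covariant=True)
T_contra = TypeVar("T_contra", contravariant=True)

class Source(Generic[T_co]):        # T appears only in return position
    def get(self) -> T_co: ...

class Sink(Generic[T_contra]):      # T appears only in argument position
    def put(self, value: T_contra) -> None: ...

class Animal: ...
class Cat(Animal): ...

def feed(src: Source[Animal], dst: Sink[Cat]) -> None: ...

# A Source[Cat] may be passed as a Source[Animal] (covariant: T<Super> = T<Sub>),
# and a Sink[Animal] may be passed as a Sink[Cat] (contravariant: T<Sub> = T<Super>).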
2. Inferred Typing and Parametrized Functions
http://code.google.com/p/copute/issues/detail?id=27
Function call expressions, which do not explicitly list the inferred types in their type parameter list, implicitly create polymorphic (i.e. potentially more than one) instantiations of a referentially transparent (aka pure) function declaration that does not contain 'typeof T', where T is an inferred type.
Whereas, function call expressions, which do not explicitly list the inferred types in their type parameter list, may only create one instantiation of a function declaration that either is referentially opaque (aka impure) or contains 'typeof T', where T is an inferred type. That implicit instantiation has the inferred type(s) of the first function call expression encountered by the compiler.
The point is that since referentially transparent (aka pure) functions have no side effects (not even internally retained state), there is no possible impact on the caller's state machine whether there is one implicit instantiation of the called function or more than one. Whereas, for example, given a function that saves internal state (e.g. a counter), implicitly calling multiple instantiations would impact the caller's state machine differently than calling one instantiation. The ambiguity arises because it is implicit: the programmer may not even be aware the code is calling more than one instantiation. A sketch of that divergence follows.
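A minimal Python sketch of why this matters, using a hypothetical stateful counter: one shared instantiation and one-instantiation-per-argument-type produce results that diverge visibly for the caller:
- Code:
# An impure "function" that retains internal state behaves differently if
# the compiler silently creates one instance per inferred argument type.
def make_counter():
    count = 0
    def counter(x):
        nonlocal count
        count += 1               # internal state: referentially opaque
        return (count, x)
    return counter

one_instance = make_counter()
print(one_instance(1), one_instance("a"))    # (1, 1) (2, 'a')

int_inst, str_inst = make_counter(), make_counter()
print(int_inst(1), str_inst("a"))            # (1, 1) (1, 'a') -- diverges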
What this means is that we never have to declare argument and return types for pure functions and we can still get complete static typing benefits without tsuris. We only need to declare any constraints (if any) on the argument and return types for pure functions, required by those functions.
Wow, I am seeing Copute gain more and more of the power of Haskell while still looking almost exactly like JavaScript (the most popular language in the world, because it is the script language in every web browser).
Covariant substitution is always due to immutability
> Shelby wrote:
>> Seems that some (many?) don't realize that what makes Haskell covariant
>> for parametrized subtyping is the referential transparency, or am I
>> mistaken?
>>
>> http://copute.com/dev/docs/Copute/ref/class.html#Parametrized_Inheritance
>> http://lambda-the-ultimate.org/node/735#comment-63943
>
> Haskell does not have covariance/contravariance because Haskell does not
> have sub/super typing.
I think I was correct (but maybe not?). Let me explain my big picture analysis.
I had written circa late 2009, "Base classes can be ELIMINATED with Interfaces":
http://www.haskell.org/pipermail/haskell-cafe/2009-November/068432.html
Haskell has implied "sub/super-typing" when the functions of one type are contained within the functions of another type, but types are only groups of functions (aka named interfaces) and never bundle/bind mutable data. And this hierarchical grouping is not required, as a type may include any function, even functions from other types, without being required to include all the functions of another type.
Instead of bundling mutable data: when the functions of a type have overloads (or there exist conversion functions to those overloads) which input a given data type (possibly a tuple), then that data type is that type. Thus a data type can have multiple types, and these are not restricted to being hierarchical.
It is critical to covariant substitution in Haskell that the data type be immutable, i.e. referentially transparent, meaning that the types (groups of functions) defined in Haskell cannot modify the input data type. Imagine an array of numbers (floats, ints, fixed point, etc.) and a function (type) in Haskell that accepts an array of ints and returns an array of numbers (perhaps this function adds an element, e.g. push()). In Haskell, referential transparency ensures that the returned array can never be referred to elsewhere as an array of ints (after a non-int has potentially been added to it); that is why the function is allowed, i.e. covariant substitution is allowed because the input is always read-only due to referential transparency. The referential transparency forces the output to be a copy of the input. A minimal sketch follows.
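Here is that argument as a minimal Python sketch (push is a hypothetical pure function; Python won't enforce the immutability, the sketch only illustrates the reasoning):
- Code:
# Pure "push": the input list is never mutated, so a reference typed as a
# list of ints can never observe the float; the widened result is a copy.
def push(xs, x):
    return xs + [x]          # returns a fresh list; xs is untouched

ints = [1, 2, 3]
nums = push(ints, 4.5)       # covariant use: the ints are read as numbers
assert ints == [1, 2, 3]     # no holder of 'ints' can ever see the 4.5
print(nums)                  # [1, 2, 3, 4.5]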
Note also that in Haskell, data (which are always immutable in Haskell) are just functions that always return the same value.
So Haskell achieves this covariant substitution via an all-or-nothing approach (caveat below) to referential transparency (aka purity).
The key to emulating Haskell's inheritance granularity is simply to declare very granular interfaces in Copute, e.g. optimally one added function per interface, and to not include non-function member variables in those interfaces:
http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
For example, instead of requiring the type Object for a function that just wants to input all types that have toString(), make Object inherit from an IToString interface that contains toString(). A minimal structural sketch follows.
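That granularity can be sketched with Python's structural Protocol (IToString, describe and Point are hypothetical names):
- Code:
# One method per interface, no data members: maximal granularity.
from typing import Protocol

class IToString(Protocol):
    def toString(self) -> str: ...

def describe(x: IToString) -> str:   # accepts anything that has toString()
    return "value: " + x.toString()

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def toString(self) -> str:
        return f"({self.x}, {self.y})"

print(describe(Point(1, 2)))         # no Object supertype is required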
So we can see that Haskell's approach is not superior in fundamental power; rather it forces a "purer" design, which may not be optimal in all cases. Furthermore, Haskell forces purity everywhere (well, not entirely, e.g. seq and the state monad), whereas Copute allows both, with (in theory) optimal granularity for the programmer to compose with. In an ideal world the programmer strives for maximum purity (immutable data) and minimum referential opacity (a minimized state machine). But the real world has a state machine; the Dunbar number is evidence of that, as are numerous fundamental theorems.
I think separation is the goal, and I observe Haskell is heavily tilted toward pure code, at the cost of intuitive integration with the impure. Perhaps that is just a matter of personal preference, and my lack of experience with Haskell. In any case, I think many programmers will share my preference, as evidenced by Haskell's slow adoption for commercial applications. And lazy evaluation has a big cost (one for which I submitted an idea for a solution).
Programmers are getting miseducated about programming language grammar ambiguities
I just commented publicly ad nauseam on this:
http://en.wikipedia.org/w/index.php?title=Recursive_descent_parser&oldid=407998337#Shortcomings
In the general case, recursive descent parsers are not limited to context-free grammars and thus do no global search for ambiguities in the LL(k) parsing First_k and Follow_k sets. Thus ambiguities are not known until run-time, if and until the input triggers them. Where the recursive descent parser defaults (perhaps unknown to the grammar designer) to one of the possible ambiguous paths, the result is semantic confusion (aliasing) in the use of the language, leading to bugs by users of the ambiguous programming language which are not reported at compile-time, and which are introduced not by human error but by the ambiguous grammar. The only solution which eliminates these bugs is to remove the ambiguities and use a context-free grammar.
http://en.wikipedia.org/w/index.php?title=Parser_combinator&oldid=407998210#Shortcomings_and_solutions
Parser combinators, like all recursive descent parsers, are not limited to context-free grammars and thus do no global search for ambiguities in the LL(k) parsing First_k and Follow_k sets. Thus ambiguities are not known until run-time, if and until the input triggers them. Where the recursive descent parser defaults (perhaps unknown to the grammar designer) to one of the possible ambiguous paths, the result is semantic confusion (aliasing) in the use of the language, leading to bugs by users of the ambiguous programming language which are not reported at compile-time, and which are introduced not by human error but by the ambiguous grammar. The only solution which eliminates these bugs is to remove the ambiguities and use a context-free grammar.
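A toy illustration of that point: the grammar S -> "a" | "a" "b" has a First-set conflict that an LL(k) table generator reports at parser-generation time, whereas an ordered-choice combinator only misbehaves at run time, on the particular input that triggers it (tok, seq and alt here are hypothetical minimal combinators):
- Code:
# Minimal parser combinators with ordered choice (PEG-style).
def tok(t):
    return lambda s, i: i + 1 if s[i:i+1] == t else None

def seq(p, q):
    def parse(s, i):
        j = p(s, i)
        return q(s, j) if j is not None else None
    return parse

def alt(p, q):
    return lambda s, i: p(s, i) if p(s, i) is not None else q(s, i)

S = alt(tok("a"), seq(tok("a"), tok("b")))   # S -> "a" | "a" "b"

def accepts(s):
    return S(s, 0) == len(s)                 # must consume all input

print(accepts("a"))    # True
print(accepts("ab"))   # False: the first alternative won, "b" is left
                       # dangling, and no conflict was ever reported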
http://www.codecommit.com/blog/scala/the-magic-behind-parser-combinators/comment-page-1#comment-5252
One major problem with recursive descent algorithms, such as parser combinators, is that they do not do an LL(k) global search for First_k and Follow_k set ambiguities at parser-generation time. You won't actually know you have an ambiguity unless and until you encounter it in the input at runtime. This is quite critical when developing a language:
http://members.cox.net/slkpg/documentation.html#SLK_FAQ
In the development of Copute's grammar, I found critical ambiguities that would not have been evident if I had gone with parser combinators (aka recursive descent algorithms). The tsuris I encountered in resolving ambiguities was due to incorrect grammar.
Also they will never be as fast, because on the k-lookahead conflicts they follow unnecessary paths, since the global optimization (the lookahead tables) was not done.
I don't see what the benefit is? Perhaps it is just that the LL(k) parser generation tools are not written in good functional programming style in modern languages, thus making them difficult to adapt to and bootstrap in a new language.
Also, even though the time spent in compilation semantic analysis of the AST is often much greater than the time spent in the parser stage, the speed of the parser stage is still very important for JIT compilation/interpreters.
Also I had looked at the recursive descent option and rejected it objectively.
The speedup in development time from not finding ambiguities will not make the ambiguities disappear, because the grammar would still be ambiguous (backtracking makes a grammar ambiguous except in rare cases), and ambiguity results in semantic inconsistencies in the language, which are borne out as needless programming bugs that waste the hours of every programmer using the language. I have numerous examples of resolved issues in my Google Code tracker for Copute.
So that speedup in development effort incurs a cost that is going to be paid (probably more excruciatingly) down the line.
Subtype semantic contract is typing
http://lambda-the-ultimate.org/node/1551#comment-64183
This is a reply to my post yesterday (posted 7am EST, 12pm EST midnight now, 12am noon for me in Asia), but it will not appear properly indented under my prior post, because I am unable to click "reply" to my prior post, because it hasn't yet appeared on this page (not yet approved by the moderator), not even visible to me privately while logged into LtU.
Shelby Moore wrote:
...the generative essence, which is that granularity of types is what determines the tension.
Afaics, the importance of typing is to enforce semantic bounds at compile-time (i.e. locality of concerns), to avoid proliferating run-time "exceptions" (misbehavior of any degree)...
...while getting rid of virtual inheritance on non-abstract classes avoids the Liskov Substitution Principle problem...
Let me show the derivation of those above assertions.
LSP states that a property is inherited for all subset(s) when the inherited property is provable for the superset.
LSP thus states it is generally undecidable that subsets inherit semantics. This is due to the Linsky Referencing principle, which says it is undecidable what something is when it is described or perceived; i.e. the static typing compiler can enforce the relationships between the declared types, but not the algorithms applied to the variants of the interface. Thus the type of a method function (aka interface) is proven to inherit at compile-time by the static typing compiler, but the semantics derived from that inherited interface type are undecidable/unprovable.
Shelby Principles on LSP
- In order to strengthen the semantic design contract, it has been proposed to apply preconditions and postconditions to the variants of the interface. But conceptually such conditions are really just types, and can be so in practice (a sketch follows this list). Thus granularity of typing is what determines the boundary of semantic undecidability, and thus, given referential transparency, also the boundary of the tension for reusability/composability. Without referential transparency, granularity increases the complexity of the state machine, and this causes the semantic undecidability to leak out (alias) into the reuse (sampling of inheritance) state machine (analogous to STM, thread synchronization, or other referentially opaque paradigms leaking incoherence into concurrency). Coase's theorem (there is no external reference point; any such barrier will fail) predicts the referential dependency failure, which is really just the Shannon-Nyquist sampling theorem (aliasing occurs unless one samples for infinite time with infinite granularity in space-time, and the Nyquist limit can't be known due to Coase's theorem) and the second law of thermodynamics, stated in the 1850s (the entire universe, a closed system and thus everything, trends to maximum disorder, where disorder means maximum possibilities or granularity).
- Thus I disagree with the current (afaik) state-of-the-art in the literature, which claims LSP allows interface argument inheritance to be contravariant and return values covariant. Afaics, they must be invariant in order to inherit the same semantics on the interface, unless the variance is between 100% abstract types (no mixed semantic implementation). What someone was probably thinking is that those variance rules on interface inheritance enable the subtype to fulfill LSP, where the property that holds for the superset (supertype) is that each member (subtype) obeys the order of the declared inheritance hierarchy; but that is a very weak property (the semantics of the order of the hierarchy does nothing to enforce the semantic behavior of the interface, e.g. inheritance order does not prevent a subtype CIntersection from silently ignoring duplicate adds to a supertype CUnion, whereas a boolean return value for success does). Note: differentiate between interface inheritance and invocation. The invocation of an interface allows the inverse of such variance, but that is an unrelated issue (if the interface inheritance is invariant and thus LSP-correct).
- For each supertype method that does not have a semantic implementation, by definition it is impossible for it to semantically deviate from its implemented subset (surjectively, though the subset members can deviate from each other); thus it can be invoked as a virtual method with semantic type-safety (without violating LSP) when called on a compile-time (aka statically typed) reference to that supertype (even though at run-time the reference points to a subtype with an implementation of that method, because an abstract type cannot be instantiated). By definition, any type that has an incomplete implementation of its interface(s) is an abstract type. Note that for a method which has an implementation, it is not semantically type-safe (it violates LSP per my prior paragraphs) if called virtually on a compile-time reference to the type (even abstract) that contains that implementation. Thus perhaps one can make a good rationale for not mixing implementation into an abstract class, so that if abstract class names are transparent to the programmer, it is always clear when semantics are undefined at compile-time and virtual at run-time, thus forcing separation (granularity). One of my critical general design rules (for everything, including social science) is that implicit should never be opaque, meaning implicit constructs should never create hidden ambiguity.
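As promised above, a sketch of "preconditions are really just types": instead of a runtime precondition that a list must be non-empty, non-emptiness becomes a type whose constructor is the only place the check can fail (NonEmptyList is a hypothetical name):
- Code:
# The precondition "list is non-empty" moved into a type, by construction.
from typing import Generic, TypeVar

T = TypeVar("T")

class NonEmptyList(Generic[T]):
    def __init__(self, head: T, *tail: T):
        self.items = [head, *tail]   # at least one element, guaranteed here

    def first(self) -> T:            # total: no emptiness check needed
        return self.items[0]

xs = NonEmptyList(1, 2, 3)
print(xs.first())                    # 1; NonEmptyList() is rejected (no head)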
Concurrency: How we will program for computers with 1000+ processors
http://lambda-the-ultimate.org/node/4182
Shelby wrote:
My data structure for Steele's word-splitting example is as follows, where enum is an algebraic type (which is just syntactical sugar for class inheritance), e.g.
- Code:
enum Segment
{
    char( s : String )
    undelimited( left : Segment, right : Segment, words : Array<String> )
    delimitedLeft( left : Segment, right : Segment, words : Array<String> )
    delimitedRight( left : Segment, right : Segment, words : Array<String> )
    delimitedBoth( left : Segment, right : Segment, words : Array<String> )

    function combinator( left : Segment, right : Segment ) : Segment
    {
        // Insert logic here to return a new Segment that combines left and right.
        // 'words' will contain incomplete words on one or both ends of the array,
        // unless the Segment is delimitedBoth.
    }
}
The input is an Array<Segment>, probably Array<Segment.char>, and the covariant substitution is legal because Segment.combinator is referentially transparent. Then map-reduce this over Segment.combinator. If Segment.combinator is associative, call an associative version of map-reduce; ditto if Segment.combinator is commutative. A runnable sketch of the associative combine follows.
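For concreteness, here is a collapsed runnable sketch of the idea in Python (the Segment variants are compressed into tagged tuples; char, combine and words are hypothetical names): each character maps to a Segment, and because combine is associative, the reduce could be split across processors:
- Code:
# Associative word-splitting combinator (after Steele), collapsed to tuples.
from functools import reduce

CHUNK, SEG = "chunk", "seg"            # CHUNK: no delimiter seen yet

def char(c):
    # a space yields a segment with empty end fragments; other chars are chunks
    return (SEG, "", [], "") if c == " " else (CHUNK, c)

def combine(a, b):
    if a[0] == CHUNK and b[0] == CHUNK:
        return (CHUNK, a[1] + b[1])
    if a[0] == CHUNK:                  # chunk meets segment: extend left fragment
        return (SEG, a[1] + b[1], b[2], b[3])
    if b[0] == CHUNK:                  # segment meets chunk: extend right fragment
        return (SEG, a[1], a[2], a[3] + b[1])
    joined = a[3] + b[1]               # two segments: the inner fragments meet
    return (SEG, a[1], a[2] + ([joined] if joined else []) + b[2], b[3])

def words(s):
    r = reduce(combine, map(char, s), (CHUNK, ""))
    parts = [r[1], *r[2], r[3]] if r[0] == SEG else [r[1]]
    return [w for w in parts if w]

print(words("here is a sesquipedalian"))   # ['here', 'is', 'a', 'sesquipedalian']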
I am also thinking that, for example, images should be defined by a recursive hierarchical algebraic type that has resonant locality in the 2D space (e.g. resonant with JPEG's 8x8 DCT blocks), instead of non-resonant locality in the 1D space of an Array<Color>, e.g.
- Code:
enum Image
{
    pixel( c : Color )
    block( topLeft : Image, topRight : Image, btmLeft : Image, btmRight : Image )

    function combinator( topLeft : Image, topRight : Image, btmLeft : Image, btmRight : Image ) : Image
    {
        return new Image.block( topLeft, topRight, btmLeft, btmRight )
    }
}
Note the Image.combinator is associative, but in a 2D sense, thus we need a 2D version of map-reduce. Map-reduce is the constructor for an instance of a recursive hierarchical algebraic type.
Note Image could be parametrized on the type of Image.pixel instead of being married to Color. A toy sketch of the 2D construction follows.
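A toy sketch of the 2D map-reduce-as-constructor (build is a hypothetical helper, assuming a square grid whose side is a power of two):
- Code:
# Recursively combine four quadrants of a pixel grid into the hierarchy.
def build(grid):
    n = len(grid)                     # assume an n x n grid, n a power of two
    if n == 1:
        return ("pixel", grid[0][0])
    h = n // 2
    quads = [[row[:h] for row in grid[:h]], [row[h:] for row in grid[:h]],
             [row[:h] for row in grid[h:]], [row[h:] for row in grid[h:]]]
    return ("block", *[build(q) for q in quads])   # the 2D combinator

print(build([[1, 2], [3, 4]]))
# ('block', ('pixel', 1), ('pixel', 2), ('pixel', 3), ('pixel', 4))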
http://lambda-the-ultimate.org/node/4178#comment-64108
Shelby wrote:
Compiled vs. library?
Afaics, the key premise this research hinges on is that some important higher-level, domain-specific portions of the optimizations must be done at compile time.
However, if it turns out that the optimizations are really just lower-level general optimizations, such as optimizing matrix math, combined with remapping data structures from accumulator and/or random access to recursive, hierarchical, associative functional map-reduce constructions (see my post at the bottom of that link; no direct link yet, as the post is awaiting moderator approval), then it may turn out that domain-specific optimizations are just run-time libraries that enforce certain data structures.
My intuition leans toward Tim's outcome being most likely, because of economy-of-scale and tackling the lowest-hanging fruit first. Nature prefers the multifurcating tree of possibilities, because it is the most economic (fluid dynamics through pipes). Examples include the internet's physical network and the human brain.
Can anyone provide examples of domain-specific optimizations that must be done at compile-time and couldn't be restructured as libraries in the paradigm of concurrent data structures that I linked above?
P.S. afaik parser combinators (non-predictive recursive descent) don't prove lack of ambiguity in a grammar, because the global search of the First_k and Follow_k sets has not been enumerated.
Ehud Lamm, have you censored my 2 latest posts at LtU
Mr Ehud Lamm (editor of "LtU" site),
It has been 24 hours since I submitted two insightful posts at your LtU site, on the future of theoretical data structures for parallel computing. I have made a copy of my posts at two blogs:
https://goldwetrust.forumotion.com/t112p90-computers#4061
Those posts have not appeared at your site. I request an explanation of why the posts have apparently been censored.
I don't really know how to express this, so here it goes, as best I can within the few minutes that I have to allocate to this...
Perhaps you don't understand that those two posts are about the theory of the future of data structures. I was not arguing about design issues, but rather pointing out that algebraic types can be used to create data structures that are resonant with concurrency. I was pointing out that Steele's explanation of the data structure cluttered that understanding. Steele's introduction of a middle chunk that could contain both delimiters and words was attacking the problem from the top down, which is the wrong way to think about concurrent data structures. Concurrency is a bottom-up map-reduce construction. That is a very fundamental point.
It is like this: I was trying to participate in your site, to see if there was an insight I could gain from others about my ideas. I felt others might gain from my insights and I might gain from their responses. I felt I was giving up some of my key commercial insights, but I felt it was worth it in the spirit of better outcomes for mankind.
However, I will just proceed without the benefit of sharing. It is going to be quite ironic if Copute becomes extremely popular and is acclaimed for solving some key issues in theoretical programming, and then I point out that Ehud and LtU were censoring my attempts to share my insights with other researchers.
God gave you a talent, which is at least your key site and positioning in the field. As you know from the Parable of the Talents, if you misuse the talent, it will be taken from you and given to someone who is able to better use it.
LtU is a very useful site, and I wouldn't propose to say that all of your talent is being wasted. However, I also see that LtU is in some respects a lot of noise with very few key generative essences realized. This is where my IQ really stands up.
Again, if I have simply misunderstood and I am wrong in some aspect, I would appreciate hearing it. I don't want to remain ignorantly overconfident. Shoot me down, if you can. Please.
Thanks,
Shelby
Ehud graciously replied, so I replied again
> Dear Shelby Moore,
>
> I should make it clear from the start that posting on LtU is not a
> right but a privilege, and that posting is entirely at my discretion.
>
> You have been posting very frequently recently, and posting very long
> messages. These messages do not seem to be of interest to the
> community, and they have not led to any replies as of yet. Your two
> posts that are being held for moderation are way too long based on the
> best practices of the site as I judge them. If and when I will allow
> them to appear will be based on my judgement of their interest to the
> community.
>
> Given the significance you perceive in them, I urge you to try to
> publish your results in peer-reviewed literature. LtU is certainly not
> aimed at publishing new results (feel free to consult the policy
> document). If you do not wish to go this route, there are many ways to
> self-publish on the internet.
>
> Best regards and good luck,
> Ehud
Dear Ehud Lamm,
Thank you for the reply. I should reciprocate with my honest reply.
For the public record, I will enumerate your accusations, as we are called to bear witness:
1. I have posted 4 times between Jan 6 and Jan 17:
http://lambda-the-ultimate.org/tracker/7621
If you had not censored my 2 posts, that would have been 6 posts between Jan 6 and 18, which is a whopping 0.5 posts per day. Wow, is that really too many?
Yet, we see you do not censor others who have posted much more frequently, such as yourself, with 12 posts between Jan 6 and 18:
http://lambda-the-ultimate.org/user/1/track
And many other examples, such as this user that posted 17 times between Jan 6 and 18:
http://lambda-the-ultimate.org/user/6002/track
Even this one page alone has numerous posts per user, by numerous users per day, in which the real-time dialogue also evidences that you don't have them on moderation as you do me (which adds circumstantial evidence that it is something personal against me):
http://lambda-the-ultimate.org/node/4176
2. My first post received a reply, and I rebutted it with my 2nd post, and there were no more replies because my rebuttal was irrefutable fact:
http://lambda-the-ultimate.org/node/735#comment-63943
So how can you assume that my posts are not generating interest in the community, when 2 of my posts obviously did (surely the person I rebutted would have rebutted me if I were not correct in the 2nd post where I rebutted him)? You have a whopping sample size of 4 posts, with 2 posts demonstrating community interest. I assume you know what standard deviation means, so then I can only assume that you are being intentionally facetious to the extreme.
Notwithstanding the statistical void of a sample size of 4, of which my demonstrated community interest was 25 - 50%, how did you measure that my 3rd and 4th posts were not such overwhelmingly accepted fact that the community appreciated them but did not see a need to comment further? Did you have non-public discussions?
3. You published 2 of my short-to-medium size posts and 2 of my longish posts. I see that others have made long posts at times. I also see that the 2 posts you censored were shorter than the 2 longish ones of mine that you did not censor. Also 1 of the posts you censored was very short, and the shortest of all my 6 attempted posts from Jan 6 to Jan 18:
https://goldwetrust.forumotion.com/t112p90-computers#4061
If the length of my posts is/was the issue, it would be very simple for me to edit my posts (if they were posted), provide a link to the same information off-site, and provide only a terse summary at LtU. I have no extreme need for my posts to appear at LtU; I won't force the issue where I am not wanted.
4. About the privilege-versus-right issue: I have read the public FAQ and policy documents, and I see I have not violated the policies, nor afaics do any of your accusations hold any objective truth. So I assume you are saying that the privilege is entirely arbitrary, based on your personal judgment of a person's qualities other than the objective quality of their contribution. In other words, it seems you are implying that LtU is not a professional site (contrary to the specific claim in your FAQ and policy documents that it is for professionals), open to objective peer review, but rather a private club for subjective, closed opinion.
http://lambda-the-ultimate.org/faq
"Your contributions are welcome, in the form of questions, announcements etc. However, abusive or off-topic posts will be deleted immediately."
"Keep in mind that LtU is a community site and regular and respected members are expected to let posters know when their posts violate the spirit of LtU. If you receive responses of this sort, it is firmly suggested that you review your contribution, and accept that your style of discussion or choice of topic may be inappropriate for this site. Rest assured that this community moderation will not be used casually. In the unlikely chance that you feel this happens, and this somehow goes unnoticed by the community at large, feel free to let me know how you feel and any other concerns you might have.
I'd be happy to have many folks contributing to the site, so if you read LtU regularly, participate in the discussion group and are interested in becoming a contributing editor and posting items to the homepage - just let me know."
I did not see any community members giving my current username any negative responses about my style of contribution. I assume you believe in the Jubilee, forgiveness, or the ability of people to learn and adjust.
http://lambda-the-ultimate.org/policies
"LtU is foremost a place to learn and exchange ideas. The LtU Forum is not a debating forum for advocacy, posturing, attacks, vendettas, or advertising. It is a forum for informed professional discussion related to existing work.
Your contributions are welcome, subject to the policies described below. Abusive or off-topic posts will be deleted immediately. Posting here is a privilege, not a right.
Note that these policies were developed mainly to help new members understand the site, and to help maintain a high quality of discussion."
5. Please don't talk to me facetiously as if I were a child, which I am obviously not. Considering I wrote a web page publishing tool (Cool Page) 13 years ago, which over a million people used to publish their own web sites, I certainly know how to publish my ideas, and I had already informed you that I had already done so on two blogs.
Honestly I really don't care if you publish my posts; it is more a point of principle. I told you before that I was doing it out of the unselfishness of my heart, to share with others for their benefit, with the possibility for me to get the benefit of feedback.
I urge you (not facetiously) to revisit your Torah and the values you were, I assume, taught but maybe have forgotten (though I can't know or judge, because I am not inside your mind and heart):
http://www.torah.org/learning/jewish-values/archives.html
For example, we must judge fairly. We must not steal (or waste) the time of others. Again referring to the link above, it talks about "Returning lost objects"; my time and posts are lost. "Honoring others", "Distancing yourself from falsehood", etc.
I hereby apologize for my sins and ask for forgiveness. I probably don't fully know to what degree my intentions were imperfect, but I do feel I had a genuine desire in my heart to share and entertain synergy for good.
I cannot judge you, sir, so let us end here. All the best,
Shelby
Fundamental outline of Copute
http://copute.com/dev/docs/Copute/ref/function.html
Without the powerful static typing and pure function options, Copute is essentially the same grammar as JavaScript.
The Copute language is composed of five fundamentals: type, instance reference, expression, function, and imperative scope.
- Type is declared by a class statement, an enum statement, inseparably in a function (instance construction) expression, or by the identifier associated with the aforementioned declarations (when not anonymous).
- Instance construction is declared by a class or enum constructor call expression, a function expression, or a literal class expression, all of which return an instance reference.
- Expression constructs an instance or operates on instance reference(s), and returns an instance reference or void.
- Functional programming is a function call expression, which may optionally nest (a hierarchy of) function call expressions. A referentially transparent (aka pure) function is re-entrant, stateless, partial-evaluation agnostic, and thus composable (aka reusable).
- Imperative (aka stateful, or state-machine) programming is an ordered sequence of expression(s). Each imperative sequence (in a nested hierarchy of them) is an identifier namespace (aka scope), i.e. a means to granularize the referential opacity. A statement is a grammatical unit which forms an expression that returns the type void.
A Copute source code file is implicitly wrapped in an anonymous function that is called at initialization.
Steven Obua just described my Copute project, very substantially
http://lambda-the-ultimate.org/node/4182#comment-64249
I will have a look at Steven Obua's current work.
I will be emailing the following to Steven Obua.
Okay, in 20 minutes I reviewed his new computer language Babel-17, his recent research paper, and the expert criticisms he received:
http://arxiv.org/PS_cache/arxiv/pdf/1007/1007.3023v1.pdf
http://phlegmaticprogrammer.wordpress.com/2010/11/21/response-to-reviews/
http://phlegmaticprogrammer.wordpress.com/2010/11/21/reviews-for-purely-functional-structured-programming/
He is aiming to achieve referential transparency (purity) in a structured language style that looks a lot like the imperative (stateful) code familiar to many programmers, but is actually selectively pure externally, by selectively not allowing references to see the external scope. He accomplishes this by "shadowing", which means hiding an external scope reference by declaring another instance with the same identifier in the local scope. This means some references could still see the external scope if they were not also hidden; thus his design is very granular in that respect (but granular in an undesirable way, I think, if not coupled with some additional semantics, as I will explain below). JavaScript has this selective hiding capability now, and so does the design of Copute. As the expert reviewers point out, this is not a new concept. The hard part is how to get programmers to use it for purity. A small sketch of shadowing follows.
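A minimal Python sketch of the shadowing idea (the names are hypothetical): redeclaring the outer identifier locally makes the outer state invisible to the block, so the function's effects stay local:
- Code:
# Shadowing for selective purity: the local 'total' hides the outer one.
total = [10]                 # outer, mutable state

def pure_by_shadowing(x):
    total = [0]              # shadows the outer 'total': outer is now invisible
    total[0] += x            # mutation touches only the local copy
    return total[0]

print(pure_by_shadowing(5))  # 5
print(total)                 # [10] -- unchanged; the function stayed pure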
His marketing objective is related to mine, in that he wants to enable the integration of structured imperative programming with pure functional programming, in a more familiar and "less mathematical" (more intuitive or natural) semantics than the Haskell monad (for the average non-mathematical programmer).
However, I think he falls far short of what I am doing with the design of Copute. Afaics, the key problem with his design is that there is no structured and explicit way to define the boundaries of which functions are referentially transparent and which are not. It is the composition of pure functions that enables reusability to scale. This is the same criticism I make against the Haskell monad: implicit typing and the ad-hoc polymorphism typing system allow any type to cross-pollute another, which means there are no concrete, explicit boundaries on purity (nor on semantics, and thus type-safety in general). I wrote about that at the following link.
http://copute.com/dev/docs/Copute/ref/class.html#Inheritance
It is really the typing system that enables scalable composability (his Babel-17 is untyped, so it is hopeless, as the expert reviewers point out). This is why Copute puts so much effort into getting the purity rules for type variance (inheritance) correct, and type is how we will parallelize our future (note the post at the following link was censored from LtU).
https://goldwetrust.forumotion.com/t112p90-computers#4061
So in summary, he is correct that Scala missed the boat on purity (though Scala's creator Odersky has stated this was due to the challenge of integrating with the rest of the Java world), but we need the correct typing system in order to achieve parallelism. I think Steven Obua is starting to realize that, but he is still delegating to inference in the compiler instead of to explicit typing, which is incorrect because the programmer has to think in terms of naturally concurrent data structures:
http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/
http://lambda-the-ultimate.org/node/4182#comment-64170
http://lambda-the-ultimate.org/node/4182#comment-64227
re: Steven Obua just described my Copute project, very substantially
http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/#comment-92
Hi Steven, thanks for the clarification. I agree, a public discussion is preferred. I didn't want to create Copute (and I don't even have a working compiler yet), but I need a better mainstream language, and I got tired of begging others to do it and waiting. I don't even think I am the most qualified to do it (I'm historically an applications programmer, learning to become a language researcher + designer since 2009), so here I am. So it is worthwhile if there is anything we can learn from each other, and share publicly. In short, I appreciate the discussion, because I don't want to make a design error or waste my effort "barking up the wrong tree".
If I understand correctly per your clarification, Babel-17 makes all functions pure, but I asserted in my prior post my understanding that within pure functions, the structured code is granularly opaque by employing per-data-instance shadowing. That does not make the containing function impure, so that is fine (Copute and JavaScript can do that too, but JavaScript can't assert that the function is pure, and its closures are even referentially opaque). One proposed difference for Copute is that the function is only a referentially transparent boundary if it is declared and enforced as pure.
If I understand correctly that we share the goal of facilitating the integration/interoperation of (and transition between) imperative and pure functional programming, then why would we not need both impure and pure functions?
My understanding is that programs are more than just functions; they are compositions of rich semantic paradigms which can be declared with typing. And life is not entirely referentially transparent. For example, the Observer pattern requires a callback (external state), thus it can never be a referentially transparent construction. However, in an idealized world, we can invert the Observer pattern as Functional Reactive Programming (FRP):
http://www.mail-archive.com/haskell-cafe@haskell.org/msg66898.html
http://www.haskell.org/haskellwiki/Phooey
FRP is not theoretically less efficient, because it could be optimized to only recompute the portions of the global FRP chain that have changed; in that respect it isn't different from the propagation of state dependencies in the Observer pattern. In both Observer and FRP, the order of propagation of state change can be coded indeterminately or deterministically. However, in some respects the Observer pattern is easier, because one can just slap a callback on anywhere, without too much concern for overall design (however, this sloppiness will manifest as race conditions and other out-of-order situations, etc).
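A toy Scala sketch of the inversion (hypothetical names; a production FRP library would add change propagation and caching of unchanged portions):
- Code:
// Observer: dependents are notified via callbacks that mutate external
// state, so the construction is referentially opaque.
class Cell(private var v: Int) {
  private var observers: List[Int => Unit] = Nil
  def subscribe(f: Int => Unit): Unit = observers ::= f
  def set(nv: Int): Unit = { v = nv; observers.foreach(_(nv)) }
}

// FRP-style inversion: a derived value is a pure function of its
// source, recomputed on demand; nothing external is mutated.
final case class Signal[A](current: () => A) {
  def map[B](f: A => B): Signal[B] = Signal(() => f(current()))
}

object FrpDemo {
  val source  = Signal(() => 21)
  val doubled = source.map(_ * 2) // doubled.current() == 42
}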
So I will not argue theoretically that it is impossible to make every possible program a composition of referentially transparent functions (that is where we want to be); however, in practice, the transition from where we are today in the computer world to the future will be eased if one can use a referentially opaque function sometimes. Sometimes "quick and dirty" is what gets the job done, and often what gets the job done is what is popular. So my idea with Copute was to make the transition to pure functional programming as familiar and painless as possible...and in my case to users of afaik the most popular computer language in the world, JavaScript. Also because then I will have a ready market, since afaik there is no good pure FP compiler with a great typing system, and optional integrated dynamic typing, that outputs JavaScript. (I started by studying HaXe's strengths and weaknesses, then I learned Haskell, etc.)
Back to the more fundamental theory point. Although we want programs composed entirely of referentially transparent (i.e. pure) functions, we encounter gridlock when we need to refactor up a tree of pure FP code because some function on a branch wasn't granular enough (i.e. conflated some data which really should be orthogonal). So eventually our ideal world of pure FP everywhere becomes untenable-- in short, it can't scale in wide-area composition. It is somewhat analogous to the C++ "const" blunder, which had to propagate everywhere in order to be used anywhere.
Thus we are likely to use pure FP for problem spaces that are well encapsulated, but we will continue to use imperative coding for more dynamic social integration. Thus it seems critical to me that our typing system make these boundaries transparent (explicit, not silently inferred polymorphism).
Do you have any comments that could spur another round of exchange? Or does this just seem wrong or irrelevant from your perspective? I am eager to learn from anyone who is willing to share. Thanks.
===========
ADD: Coase's theorem applies, i.e. there is no external reference point, thus all boundaries will fail (be subverted by the free market, because the universe is trending to maximum disorder). Thus an all-or-nothing pure FP boundary at the function is not realistic. It is not in harmony with thermodynamics and entropy.
Copute as a startup...
Yesterday was an interesting day, because I was excited to be getting some exchanges with people who do the kind of work I do, and who thus can challenge, inspire, and interact with me on that intellectual level.
The emotional reaction was, I believe, wanting to find a way as quickly as possible to get some such people to work together with me, because it would be immensely fun, exciting, and productive. I think in large part that is why these guys work in Silicon Valley-- for the entire social aspect of the synergy.
So right there that probably kills any chance of others working with me at this juncture, given I am in the Philippines and have no desire to go work in Silicon Valley (or any other tech center such as San Antonio, etc).
But as I got to thinking more about the economics of what I am doing, I realized that I probably shouldn't be paying anyone a dime. The reason is that what will make this fly is it being open-source, which means people contribute because they know they own it, in that they can use the sum of the work any way they wish, now and into the future.
So the only way to get an open-source project rolling with contributions is to first deliver an initial product which is useful enough that people start contributing, because they need some aspect of what is already there combined with something else they need.
So it is all about need. That is key.
Also, I don't think you will get anyone to contribute to open-source if they think you are going to charge for access. So I think it is very key to make it clear that the model for the language is no charge for access. It has to be stressed that I own the Copute.com domain and may provide an optional way for developers to monetize their efforts, but that is an entirely optional marketing side-show; Copute itself is open-source, public domain, and not owned by anyone. No strings attached.
So I think the correct time to bring in investors is when we go to launch the Copute.com monetization engine, which has to come after the Copute language is done and already generating significant contribution.
So this means I am on my own for the time being. If anyone joins to help me at this stage, it will be a gift from God, because it would take someone LIKE MYSELF, who is utterly convinced of the importance of Copute and willing to dedicate themselves to it without any certainty of financial gain.
I don't think I am likely to find another person like myself. I got a little inspired reading Joseph Perla's blog and realizing there is a bright young man who shares some of my philosophy (but not all). But there is still a big gap between that and being LIKE ME, with respect to Copute.
Of course we know what happened the last time I got inspired by a young programmer, Nicolas Cannasse. I was admiring his work on HaXe, but it turned very bitter when he shot down every idea I had about improving HaXe and banned me from his discussion group mailing list. However, Copute is in large part influenced by HaXe, so my tribute to Nicolas is implicit. So it is not bitter after all. I even wrote in private to others that I don't have to be frustrated with Nicolas; I wish him the very best.
Realized Haskell vs. Copute are fundamentally equivalent in power, except for...
Copute has one key fundamental advantage:
http://code.google.com/p/copute/issues/detail?id=39#c2
Plus, Copute has a more intuitive syntax for imperative programmers (the bulk of programmers):
http://code.google.com/p/copute/issues/detail?id=39#c3
Why Copute won't support catching exceptions
re: Steven Obua just described my Copute project, very substantially
http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/#comment-99
Agreed, Go is aimed at systems programming and isn't an optimum solution for the use case I am driving towards either. Moreover, I think the future of systems programming is apps that are provably correct (Linus Torvalds admitted such a remote possibility for "designer languages"), so I think Go will also be superseded in time, though that might be a long time from now. Good to see Thompson is working on an upgrade for C.
The problem is that an exception causes order dependence, which removes the ability to parallelize the implementation:
- Code:
raise First + raise Second handle First => 1 | Second => 2
What is the value of this expression? It clearly depends on the order of evaluation.
Although afaik an exception does not literally violate referential transparency, it does remove the orthogonality of functions, which for me is one of the key outcomes of referential transparency, and orthogonality is necessary for composability. Typing, meanwhile, provides the lego patterns for composability. (Btw, the name Co-pute is driving towards cooperation and composition; I am aiming for a wide-scale web mashup language.)
If the programmer expects the possibility of an exception, then the function needs to declare that in its types. Afaics, there is no shortcut of exceptions-without-types that maintains composability and concurrency. The programmer can make a type NonZero if he prefers to attack the problem on input, somewhat analogous to inverting the Observer pattern into Functional Reactive Programming, as I mentioned in my 2nd post above.
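A minimal Scala sketch of attacking the problem on input (the NonZero type is named in the text; the smart-constructor shape is my assumption):
- Code:
// The invariant lives in the type: a NonZero can only be obtained via
// the companion, which refuses 0, so `divide` can never throw.
final class NonZero private (val value: Int)
object NonZero {
  def apply(i: Int): Option[NonZero] =
    if (i != 0) Some(new NonZero(i)) else None
}

object DivideDemo {
  def divide(n: Int, d: NonZero): Int = n / d.value

  // The caller checks once at the boundary; thereafter the compiler
  // proves the precondition everywhere the NonZero value flows.
  val quotient: Option[Int] = NonZero(3).map(d => divide(12, d))
}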
Yeah, it is fugly to have to propagate exception cases everywhere. But that is life. The shortcut has a real and important cost. And I agree with the comments from the Go community that exceptions turn into a convoluted mess, especially when one starts composing functions in different permutations.
==============
ADD: Consider a function A which takes a function B as input, where A catches an expected exception-- but note this is not declared in the type of A or B. Now a B is passed to A which, unlike other Bs in the past, itself catches the exception that A is expecting to catch. The programmer of B would have no way of knowing that A expected the same exception, because it is not declared in the types. Orthogonality and composability are subverted. Whereas if B declared, by returning a non-exception type, that it handles the exception, then the problem is resolved. A would then be overloaded (folded) on the return type of B: one version/guard of A that handles the exception, and one that lets B handle it.
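A hedged Scala sketch of the A/B scenario (names hypothetical; Scala cannot overload on a function argument's erased type, so the two variants of A get distinct names here):
- Code:
object ExceptionTyping {
  sealed trait Err
  case object DivByZero extends Err

  // B's type declares whether it handles the error itself...
  val bHandled: Int => Int = n => if (n == 0) 0 else 100 / n

  // ...or surfaces the error for its caller to handle.
  val bRaises: Int => Either[Err, Int] =
    n => if (n == 0) Left(DivByZero) else Right(100 / n)

  // Two variants of A, selected by B's declared return type, so the
  // division of responsibility is visible to the compiler rather than
  // hidden inside an undeclared catch.
  def aLetsBHandle(b: Int => Int, x: Int): Int = b(x) + 1
  def aHandles(b: Int => Either[Err, Int], x: Int): Int =
    b(x).getOrElse(0) + 1
}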
I can see how you get the elegant determinism by basically adding a "null" test on every return type of every function, instead of doing a longjmp, but afaics hiding the exception return type from the programmer causes the above problem.
I see no problem using exceptions when there is no function call inside the try block.
If I have made a wrong assumption or erroneous statement, I apologize in advance.
More on composability and exceptions
Still "talking shop" with the Steven Obua.
http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/#comment-103
I was making two related points. One: concurrency is not achieved unless we use a Maybe type. Afaics, you've since clarified for me (thank you) that you are using a Maybe type, and that you've hidden (made implicit) the monadic action in a try-catch abstraction.
Agreed, exceptions can be concurrent if they don't employ the longjmp paradigm and instead (even implicitly) employ the Exception monad on the Maybe algebraic type, where the compiler does the monadic lifting behind the scenes.
So our discussion is about the choice between doing that implicitly and doing it with static typing. The advantage of doing it with static typing is that it can still be propagated automatically with a monad type (which afaics is what Babel-17 achieves implicitly), and we gain the ability to overload on type (as Haskell and Copute do).
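A minimal Scala sketch of the explicit version (using Option as the Maybe type; an Exception monad carrying the error value would use Either instead):
- Code:
object MaybeDemo {
  // "Raising" is just returning None: no longjmp, no hidden control flow.
  def safeDiv(n: Int, d: Int): Option[Int] =
    if (d == 0) None else Some(n / d)

  // The for-comprehension is the monadic lifting that an implicit
  // try-catch abstraction would do behind the scenes: None propagates
  // in data-flow order, so independent subexpressions remain free to
  // be evaluated concurrently.
  val result: Option[Int] = for {
    a <- safeDiv(10, 2)
    b <- safeDiv(a, 0) // "raises": the whole composition yields None
  } yield a + b
}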
The second point I was making is that static typing is critical for composability.
With dynamic typing, the only way to prove correctness is with assertions on inputs (i.e. exceptions). These assertions are just types[1]: e.g. instead of throwing an exception to ensure non-zero input, just make the input type a NonZero type.
How do we compose functions when their invariants are not explicitly stated by type, but rather hidden in their implementations as assertions that will throw exceptions? We end up with spaghetti, because the composition of the invariants is not being checked by the compiler. Non-declared assumptions get lost. I have 25+ years coding in spaghetti. Is there another way to deal with it that I am not aware of?
For composability, afaics the exceptions must be encoded on the return type (post-conditions[1]) and/or on the input types (pre-conditions[1]).
[1] http://lambda-the-ultimate.org/node/1551#comment-64186
=======================================
=======================================
http://phlegmaticprogrammer.wordpress.com/2011/01/15/how-to-think-about-parallel-programming-not/#comment-105
Agreed, I definitely want to avoid subjective injection (because it won't help either of us produce a better product). So we are not going to fight that irrational war; we will delineate what we can conclude objectively. You will correct me and point out where I am making a subjective conclusion.
Agreed that the tension in composability hinges on the granularity (more completely, the fitness) with which the invariants can be declared and enforced/checked. Agreed also that to the degree one's statically typed implementation does not fully express the invariant semantics, aliasing error will spill out chaotically (aliasing error manifests as noise).
Isn't the objective fallacy of arguing against static typing that the alternative isn't better? Afaics, testing is never exhaustive, due to the Halting problem (beyond covering some common use cases, which is not an argument against static typing because, as you also said, testing is needed in any case), because one would need to test every possible composition at all permutations of N potential mashup companion functions before we even know what they will be. Note, there were some comments at LtU yesterday about the impracticality and inadequacy of testing for proving associativity in Guy Steele's example. Documentation is not an argument against static typing either, as it is needed in any case.
Static typing is a first-level check: it enables the compiler to catch some errors. And to the degree one strives to produce types that fully express the invariant pre- and post-conditions at all semantic levels, the degree of checking is increased (though sadly aliasing error isn't a linear phenomenon, so that might not help). This is not security against all possible semantic errors, but at least the remaining errors are those that slipped through the design of the types (even though they manifest as aliasing error far from the source). Types can be reused, so we can put a lot of effort into designing them well.
There is a tradeoff: as the types become more restrictive, they become more difficult to compose. The C++ "const" blunder is one of the infamous painful examples (I have hopefully been careful not to repeat "const" in Copute). This is real life injecting itself into our attempts at a Holy Grail, of which there never will be one, of course. "const" is actually a futures contract, which is the antithesis of natural law: it could never be assigned to a non-const in any scenario (there was no escape route), thus it infected the entire program.
Thanks for pointing out that an exception is not a result type and thus a design error. I agree that declaring exceptions as return types is a design error (an ad-hoc hack), because an exception is not a proper post-condition, i.e. it is a non-result semantic, and thus a design error to return it as a result. But I don't see objectively how an implicitly lifted exception-monad try-catch is not also a design error by the same logic? Divide-by-zero means our result is NFG (no fugling good, lol), which is not a result at all-- it is a different semantic entirely-- so normally we design our code so the exception will never occur. So I offer a NonZero argument type: the caller constructs that type, which checks at run-time that a 0 value is not being passed. The dynamic checks are still there in static typing, but they are forced to be performed (note the constructor NonZero( 0 ) would throw an assumed uncaught exception, aka an assert, i.e. a stack trace into the debugger, because the caller didn't even do the check). If the caller already has a NonZero type, they don't need to check it again. Afaics, that is more PROVABLY correct than returning an exception, because it describes compiler-checked invariants, rather than an ad-hoc return which is not a return semantic. So I am not arguing for an exception return type, except where you argue for try-catch as an ad-hoc "solution" (perhaps that wasn't reified in my prior post), but rather for declaring the invariant arguments and avoiding exceptions entirely, where practical (i.e. correct design).
After all, we are not in a religious war, because Copute supports dynamic typing too. I understand that in some (maybe even most) use cases, static typing does not provide reasonable benefits to justify its use; it can cause tsuris for very minute gains in checking. Afaik, inferred typing goes a long way to increase the utility. And potentially map-reduce-constructed data types will make typing much more useful, which pertains to the title of this blog page (the link is to my comments which were censored from LtU). I am not criticizing Babel-17 for not having static typing (and am encouraging you to pursue your design ideas); I only asked that we characterize the tradeoffs of adding it later incrementally, never, or now (as I am trying to do in one big, difficult design+implementation step). And afaics, this discussion has helped document/reify/correct some of my own understanding. I hope we have also clarified for your readers some of your design decisions for Babel-17. What else can I say but a big, sincere thank you.
Are there any more objective observations we can make on this issue? Any corrections?
Shocking Java comparison
I was shocked by how many things Java cannot do (well) that Copute proposes to do:
http://copute.com/dev/docs/Copute/ref/intro.html#Java
This list became more exhaustive after researching how I might compile Copute code to Java source code-- it probably can be done, but the Java code would be ugly and bloated, with semantics almost impossible to follow.