GoldWeTrust.com

Computers:

5 posters



Re: HaXe and Copute

Post  Shelby Sun Sep 19, 2010 8:24 am

dash wrote:
Shelby wrote:I wrote a Huffman encoder in HaXe (Flash, etc.) for JPEG in 2008

Finally you may have actually contributed something that is useful to me. I've never heard of HaXe. I've often wanted to author Flash files, but I want to use open-source tools with a traditional Unix-style workflow (Makefiles, text editing, etc., as opposed to point-and-click user interfaces). Maybe this HaXe is the ticket.

Thanks!

Thanks for the tip on the Goertzel algorithm. Note that Copute aims to be the holy-grail upgrade from HaXe, and I had initially planned to compile to HaXe, so therefore Flash, PHP, C++, JavaScript, Neko VM, and (soon) Java would all be targets.

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Re: Computers:

Post  Guest Sun Sep 19, 2010 1:21 pm

Shelby wrote:, note that Copute aims to be the holy-grail upgrade from HaXe, and I had initially planned to compile to HaXe, so therefore Flash, PHP, C++, JavaScript, Neko VM, and (soon) Java would all be targets.

In June 2008 I wrote a BASIC interpreter with graphics and sound additions, hoping to get my kids interested in programming.

http://www.linuxmotors.com/SDL_basic/

The whole project took under 2 weeks. Originally I was parsing the BASIC program manually, the way BASIC used to work when interpreted. I had finished the thing and gotten it working completely when it occurred to me that separating the parsing from the execution would be a way of boosting performance. I wanted to pay the price of parsing the syntax only once.

So I reimplemented the entire interpreter. I wrote a Bison (Yacc) grammar and had the parser output virtual machine code for a stack-based virtual machine, which I also implemented. That way a syntax error could be reported immediately -- say, a GOTO to a line that didn't exist -- rather than only when that line was reached. With the traditional approach one would have to exercise the entire program to even find a stupid typing error. I was very happy with the end result; it was much faster than my original version.
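To make the "parse once into bytecode, then execute" idea concrete, here is a minimal sketch of a stack-based virtual machine in Python (not dash's actual C implementation; the opcodes and the tiny hard-coded program are invented for illustration):

Code:
# Minimal stack-machine sketch (illustrative only; not the SDL_basic VM).
# A parser front end is assumed to have already emitted this bytecode, so
# errors like a GOTO to a missing line were caught before execution began.
PROGRAM = [
    ("PUSH", 2),      # push a literal onto the stack
    ("PUSH", 3),
    ("ADD", None),    # pop two values, push their sum
    ("PRINT", None),  # pop and print the top of the stack
    ("HALT", None),
]

def run(code):
    stack, pc = [], 0
    while True:
        op, arg = code[pc]
        pc += 1
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack.pop())
        elif op == "HALT":
            return
        else:
            raise ValueError("unknown opcode %r" % op)

run(PROGRAM)   # prints 5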

Later I learned about a few open-source attempts to create Flash content, and the entire spec for the Flash file format, including its virtual machine, had even been released. It occurred to me to modify the BASIC to output virtual machine code for the Flash interpreter, but I never did it.

I'm sure you must be familiar with Bison. You'd want to use that approach if you want to build your Copute compiler.

[Screenshot: the SDL_basic interpreter running]

A note on size for the basic:
vmachine.c = 1220 lines of C code
basic.c = 404 lines
grammar.y = 1267 lines
Everything else is trivially small...

ETA: I want to relate a story. A friend of mine criticized my choice of BASIC as a language to introduce to my kids, seeing as how there are better, cleaner languages that are more up to date (Python for example, maybe Ruby). And he said the GOTO statement was bad, and he quoted Dijkstra's long opposition to its use. I told my friend, who is in his 60s, that it's strange that in order to make his point he has to bring up Dijkstra, considering that my friend has been programming longer than Dijkstra had been when Dijkstra formed his opinion about GOTO being bad. I told my friend that he was fully qualified to have his own opinion about GOTO, and that I in fact valued his opinion more than whatever this fellow Dijkstra's opinion was.

At what point do we realize we're just as qualified as the "experts" to decide what is right and wrong? Evidently for some of us we never do.

I learned how to program with BASIC first. I was able to "overcome" the bad practices that BASIC leads to. And having that experience, I was able to appreciate the value of the improvements that came after BASIC. Why deny my kids that evolutionary history? Moreover, modern structured languages are so nitpicky about syntax that they take a lot of the fun out of it. BASIC is quite forgiving that way; the syntax is easy to get right, compared to C where it is SO easy to get it wrong. I was trying to lower the barrier to entry into programming in the first place. Even a bad programmer has a vast advantage over the nonprogrammer...

Dijkstra is an interesting fellow. I like him MUCH more than Knuth.
http://en.wikipedia.org/wiki/Edsger_W._Dijkstra

Guest
Guest



Howdy Dash...

Post  SRSrocco Sun Sep 19, 2010 6:31 pm

CAN YOU SEE ME?

[Image: frog]

DASH... good to see you are still alive. Actually I have missed you and your debates. How is everything going? Looks like you are still doing well with programming. Anyhow... I wish you would stop in at the SILVERSTOCKFORUM and say a few words once in a while.

best regards,

steve


SRSrocco

Posts : 22
Join date : 2008-11-02


Re: Computers:

Post  Guest Sun Sep 19, 2010 11:19 pm

SRSrocco wrote:DASH...good to see you are still alive.

Thanks! A lot of my memories of those old conversations have been recycled though... I'm drawing a blank as to details, although I do recall the handle "SRSrocco".

Guest
Guest



Re: Computers:

Post  Shelby Mon Sep 20, 2010 4:30 am

dash wrote:
Shelby wrote:, note that Copute aims to be the holy-grail upgrade from HaXe, and I had initially planned to compile to HaXe, so therefore Flash, PHP, C++, JavaScript, Neko VM, and (soon) Java would all be targets.

In June 2008 I wrote a BASIC interpreter...

So I reimplemented the entire interpreter. I made a Bison (Yacc) grammar and had the parser output virtual machine code for a stack based virtual machine which I also implemented...

I'm sure you must be familiar with Bison. You'd want to use that approach if you want to build your Copute compiler.

If you went to the Copute.com site, you would see I already implemented my own custom grammar and parser generator in JavaScript.

dash wrote:I learned how to program with BASIC first. I was able to "overcome" the bad practices that BASIC leads to. And having that experience I was able to realize the value of the improvements that came after BASIC. Why deny my kids that evolutionary history?...

Agreed, but after they learn it, teach them to drop GOTO.

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Technology will drive value of mines toward 0 (zero)

Post  Shelby Wed Sep 22, 2010 1:14 pm

I am not talking about tomorrow, but this New York Times article from 1897 supports my hypothesis:

http://query.nytimes.com/mem/archive-free/pdf?res=F00E16FD3D5414728DDDAB0894DD405B8785F0D3

It says the ants mine and selectively choose particles.

I had the thought, perhaps it was 2007, that nanotechnology is going to destroy the value of existing mines.

Because we will soon build little bots, smaller than ants, which will mine by munching on the rock and sorting the various minerals at the particulate level. In other words, the mill will be decentralized into the earth. No chemical post-processing will be necessary. These nanobots will build piles of sorted minerals.

Thus the huge capitalization of mines won't make any sense, as the cost of minerals will plummet to near 0.

Nanotech miniature bots might reduce the cost (and energy cost) of mining gold by orders of magnitude, enabling lower grades to be extracted at accelerated rates, thus destroying the stocks-to-flows principle.

My research paper on ants as external brain neurons applies:

https://goldwetrust.forumotion.com/knowledge-f9/book-ultimate-truth-chapter-6-math-proves-go-forth-multiply-t159-15.htm#3640

==========
UPDATE:
Shelby wrote:...Nanotech miniature bots might reduce the cost (and energy cost) of mining gold by orders of magnitude, enabling lower grades to be extracted at accelerated rates, thus destroying the stocks-to-flows principle.

Actually that would only be temporary. A new higher level of stocks would accumulate, and flows would eventually stabilize as a shrinking % of stocks at the higher rate.

Also, production of everything in the economy would increase too, so gold would retain its relative value, and probably increase in value.

Note the depreciation of the value of the mines would be relative to their former fiat value (everything would be getting cheaper in fiat, but remember stocks are leveraged to dividends). After they stabilize, they would be investments again.

At that point I'd be invested in nano-bots. I'm guessing this could take a while and won't catch anyone by surprise.

Agreed. What I think could catch you by surprise, perhaps, is capital controls and effective confiscation of your brokerage account. At some point the Western nations have to go after the remaining capital in the system.

It may come from an international ruling such as Basel.

But even more likely is another (maybe many more) round(s) of massive selling of stocks in a panic, especially if the metals are sold off too.

Also fraud and deceit will radically accelerate (because westerners have nothing to lose any more, the veneer of "Leave it to Beaver" is entirely gone, people will become cutthroat and callous).

Also the new economy will detach, and you will think you are doing well with +30% gains per year in your net worth, but in reality you will be falling behind rapidly. So the nanotech is not coming just from one direction; it is coming at you from every direction. For example, look at what I am working on. Things like that will happen under your nose and you won't see it until you wake up and realize your fiat just isn't worth anything in the new economy.

You come to me and offer me a $billion, and I say sorry, I can't do anything with that; I have more cash than I can find suitable experts to hire. I will say I need brains, not cash.

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Information Age economy

Post  Shelby Wed Sep 22, 2010 9:31 pm

Read also the prior post above.

In case anyone didn't understand, my point is that we are moving into an information-economy age, out of the industrial age. We can tell that the industrial age is dying because China is driving profit margins negative. Automation is the only way to do manufacturing profitably. The race is on.

The problem is that in an information economy, capital is nearly all knowledge-based. And really smart people are not motivated to be bought by an investor; they are motivated to invest their time and get a % of the company.

So it will take relatively smaller amounts of traditional money to form these new companies. This is already the case: Facebook was formed with $200,000.

Mostly this is due to computers. The physical sciences are being reduced to digital programming, e.g. biotech, nanotech, etc. are mostly computer science (I know because I've looked at the way they do their research).

Actually what is happening is the knowledge is becoming wealth.

The knowledge holders will take the capital of the capitalists, simply by ignoring them and capturing the market. The capitalists' return on capital will not keep up, so their purchasing power will fade away.

The counter-argument that can be made is that knowledge isn't fungible, so we will still need gold as a store of value. But that misses the point. Gold earns no income, and the knowledge holders will be taking away the income sources, which will be increasingly knowledge-based.

Also, because of the intractable problems facing the world at this time, where the capitalists are trying to grab a monopoly, the knowledge holders are accelerating disruptive technology. We will see an explosion of technology in the next 10-20 years that will cause more change than in the entire history of mankind.

The Malthusians will be wrong again about technology, just as they have been at every important juncture in every century.

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Micro-kernel issues

Post  Shelby Sun Sep 26, 2010 4:10 am


Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Learned more about Mac OS X and micro-kernel issues

Post  Shelby Sun Sep 26, 2010 5:55 pm

Okay, I have been doing a fair amount of reading about the issues debated between Linus Torvalds (Linux creator) and the proponents of micro-kernels, and about the multiple failures thus far of GNU Hurd. I am reasonably well qualified to understand these sorts of concurrency issues because of the work I have already done on Copute.

Mac OS X uses the Mach micro-kernel in conjunction with portions of BSD Unix, but these OS services run as a kernel process, not a user process, and Mach is not very micro-kernel anyway. So the micro-kernel aspect of Mac OS X is really just the modularization of some of the core kernel, with no isolation. Thus there is no architecture in Mac OS X for the kind of security that would allow rogue applications to run without harming the rest of the system. The iPhone's iOS derives from the same Darwin lineage as Mac OS X.

===============================

One of the big stumbling blocks (since at least 2002, if not 1992) appears to be that Unix is based around file/stream handles for accessing resources and devices. The problem is how to pass around these handles securely between user mode processes:

http://www.coyotos.org/docs/misc/linus-rebuttal.html
http://www.opensubscriber.com/message/l4-hurd@gnu.org/12608235.html
http://lists.gnu.org/archive/html/l4-hurd/2002-12/msg00003.html
http://www.coyotos.org/docs/ukernel/spec.html#frontmatter-2.2

Apparently the lack of an asynchronous form of inter-process procedure call (IPC) in the micro-kernel complicates, or reduces the performance of, any solution in their view. Also, the inability of the micro-kernel to enforce capability metadata on shared resources (and on the IPC call itself) is claimed to be a security hole and a hindrance to solving this in the user-process layer. I do agree that the receiver of IPC should be protected against DDoS (message-rate flooding) until it approves the receipt of further IPC from a process.
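For readers unfamiliar with the synchronous-versus-asynchronous IPC distinction being argued over, here is a rough Python sketch of the asynchronous style using the standard multiprocessing module (this is only an analogy; a micro-kernel IPC primitive is a different beast): the sender enqueues a message and continues immediately, without waiting for the receiver to handle it.

Code:
# Asynchronous message passing between two OS processes (illustration only).
from multiprocessing import Process, Queue

def receiver(inbox):
    while True:
        msg = inbox.get()           # blocks until a message arrives
        if msg == "shutdown":
            break
        print("receiver handled:", msg)

if __name__ == "__main__":
    inbox = Queue()
    p = Process(target=receiver, args=(inbox,))
    p.start()
    inbox.put("open /dev/fb0")      # returns immediately: the send is asynchronous
    inbox.put("shutdown")
    p.join()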

The fundamental issue is the one I am trying to solve with Copute, which is why I guess I bother to write about it. And that is the issue that sharing a resource is a security hole for the same reason that it kills inter-function/process composability scaling, i.e. it removes referential transparency. Linus is correct when he says that the solution ultimately has to come from the language layer. Apparently Microsoft Research is aware of this with their work on Singularity since 2003. Jonathan Shapiro (PhD), formerly of Coyotos, BitC, and EROS, which was working to improve Hurd, joined Microsoft Research in 2009 to work on embedded derivatives of Singularity. However, afaics Singularity attacks the issue of trust but doesn't address the issue of referential transparency, and afaics trust is not the issue that needs to be solved. They admit this:

Second, Singularity is built on and offers a new model for safely extending a system or application's functionality. In this model, extensions cannot access their parent's code or data structures, but instead are self-contained programs that run independently. This approach increases the complexity of writing an extension, as the parent program's developer must define a proper interface that does not rely on shared data structures and an extension's developer must program to this interface and possibly re-implement functionality available in the parent. Nevertheless, the widespread problems inherent in dynamic code loading argue for alternatives that increase the isolation between an extension and its parent. Singularity's mechanism works for applications as well as system code; does not depend on the semantics of an API, unlike domain-specific approaches such as Nooks [49]; and provides simple semantic guarantees that can be understood by programmers and used by tools.

The principal arguments against Singularity's extension model center on the difficulty of writing message-passing code. We hope that better programming models and languages will make programs of this type easier to write, verify, and modify. Advances in this area would be generally beneficial, since message-passing communication is fundamental and unavoidable in distributed computing and web services. As message passing becomes increasingly familiar and techniques improve, objections to programming this way within a system are likely to become less common.

Fundamentally, if the processes on the computer share state, then they cannot be secure, nor do robustness and reliability scale. Linus was correct that, for the current state of programming, the micro-kernel gains nothing, because one ends up with monolithic spaghetti anyway.

So it looks like they are all waiting for my Copute to take over the world.

But how, for example, can one share files between processes, i.e. how do we handle state that must be shared? Well, state is what you want to push out to the outermost functions anyway in order to maximize referential transparency, so state must be handled with permissions (i.e. capabilities), more finely grained than Unix's owner, group, other tuple. So shared state should be owned by an interface, and that interface should decide which permissions it requires for inquiry (DDoS not denied), read, and write access. For example, the user login service might expose an interface that allows a caller to sign some state which is stored encrypted with the user password and which can only be retrieved by that caller's signature. The web browser could then, for example, store cookies securely.
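Here is a rough Python sketch of that finer-grained permission idea (the class and method names are invented for illustration, not Copute or any real OS API): a piece of shared state is owned by one service, which hands out unforgeable capability tokens granting only read or only write access.

Code:
# Sketch of capability-gated shared state (names are illustrative).
import secrets

class StateStore:
    """Owns one piece of shared state; grants unforgeable tokens per right."""
    def __init__(self, value):
        self._value = value
        self._caps = {}             # token -> "read" or "write"

    def grant(self, right):
        token = secrets.token_hex(16)
        self._caps[token] = right
        return token

    def read(self, token):
        if self._caps.get(token) not in ("read", "write"):
            raise PermissionError("no read capability")
        return self._value

    def write(self, token, value):
        if self._caps.get(token) != "write":
            raise PermissionError("no write capability")
        self._value = value

cookies = StateStore({"session": "abc"})
browser_cap = cookies.grant("read")         # the browser may read but not write
print(cookies.read(browser_cap))
# cookies.write(browser_cap, {}) would raise PermissionError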

Singularity doesn't use memory protection because it assumes only trusted code is run and that there are no bugs in the trusted verification. Otherwise one would want some memory protection so that processes cannot access the memory of other processes.

http://esr.ibiblio.org/?p=2635&cpage=1#comment-280393

Shelby aka Jocelyn wrote:Linus is correct that a micro-kernel degenerates into monolithic mush unless there exists a system-wide solution, which ultimately has to come from the language (virtual machine) layer. I have deduced that the issue is most fundamentally that sharing a resource is a security hole for the same reason that it kills inter-function/process composability scaling, i.e. it removes referential transparency. Apparently Microsoft Research is aware of this with their work on Singularity.

One of the big stumbling blocks for GNU Hurd (since at least 2002, if not 1992) appears to be that Unix is based around file/stream handles for accessing resources and devices. One referential INtransparency problem is how to pass around these handles securely between user mode processes:

http://www.opensubscriber.com/message/l4-hurd@gnu.org/12608235.html
http://www.coyotos.org/docs/ukernel/spec.html#frontmatter-2.2

Apparently the lack of asynchronous form of inter-process procedure call (IPC) in the micro-kernel complicates or reduces the performance of any solution in their view. Also the inability of the micro-kernel to enforce capability meta-data on shared resources (and the IPC call itself) is claimed to be a security hole and hindrance to solve in the user process layer.


Last edited by Shelby on Wed Sep 29, 2010 8:29 am; edited 2 times in total

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Computers: - Page 4 Empty "hacker" vs. "cracker"

Post  Shelby Wed Sep 29, 2010 9:36 pm

http://esr.ibiblio.org/?p=2646&cpage=1#comment-280504

Shelby aka Jocelyn wrote:Someone who "cracks" is also contributing to the evolution of security and progress. If it is not a victimless action, it probably carries some civil and/or criminal liability. The moral judgment is irrelevant to me as a scientist.

Note that Eric Raymond, at the above site, is censoring my posts.

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Video Steganography Signatures

Post  Shelby Sat Oct 02, 2010 4:18 am

Video Steganography Signatures

"Preventing unauthorized video publication without police or centralism"

Webcam (video) chat is on the rise, but one threat is that a stranger will publish a captured chat session anonymously.

When the chat is broadcast to only one receiver and publication by the receiver is not authorized by the sender, it is theoretically possible for the sender to save a copy, along with the known details of the receiver's identity (i.e. a captured copy of the receiver's video broadcast). Imagine there was a server that aggregated these into a database and allowed any browser to request the identity of the person who was the receiver of a published video stream. This would certainly discourage unauthorized publication.

But this could be subverted by making lossy (compression or artifact) changes to the video which did not entirely destroy its view-ability and meaning. However, note that the higher the compression, the more difficult it becomes to make a lossy change that does not destroy its view-ability:

http://en.wikipedia.org/wiki/Steganography#Countermeasures_and_detection

In general, using extremely high compression rate makes steganography difficult, but not impossible. While compression errors provide a hiding place for data, high compression reduces the amount of data available to hide the payload in, raising the encoding density and facilitating easier detection (in the extreme case, even by casual observation).

Short of hand-painting every frame of a video, any consistent algorithmic change or random artifact is going to be detectable, and more easily so at higher compression. The delta compressions between video frames are block-wise (e.g. 8x8 pixel) linear transformations which approximate the real-world 3D model, e.g. an inverse perspective transformation to 3D space followed by a rotation or translation of an approximated local 3D surface geometry, surface reflectivity texture, etc.

Thus at very high compression ratios, the possible deltas cannot contain artifacts (noise) above a very low threshold, otherwise the view-ability (meaning) totally diverges. Thus detecting equivalence reduces to measuring distance from the stored baseline copy.

The point of the solution is that the sender could broadcast a less-compressed version and store a highly compressed one; then, to compare against a published copy, (re-)compress the published copy highly.

But if the sender broadcasts to more than one receiver, then we need a method of detecting which receiver made an unauthorized publication. Since we can normally capture video at much higher frame rates than can be broadcast over the network in real time, or alternatively than are needed to capture the speed of human motion (e.g. you can move your arm or mouth very far in 1 sec), then if the number (N) of receivers is not greater than the capture frame rate divided by the broadcast frame rate, each receiver could be sent a different set of frames of the video. However, perfectly high compression would work against us, because most human motions are continuous (not jerky, no change of direction) at sub-second granularity (e.g. swinging the arm), so perfect compression would enable perfect inter-frame interpolation (ponder that deeply!). However, this actually turns itself inside-out, because the current state of the art in video compression is nowhere near perfect compression (i.e. a complete global 3D model), but rather a local block-wise approximation (error with respect to the perfect 3D model is low relative to the local block and local frame context only). So although interpolation would be undetectable for some blocks, other blocks would show large errors from the stored copy of the expected value. So if the frame spacing sent to each receiver is random, then we can statistically detect which of the receivers made the unauthorized publication with interpolated frames.
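Here is a toy Python model of that frame-assignment idea (illustration only: the "frames" are just random numbers, and real video, compression, and robust distance metrics are omitted): each receiver is sent every Nth captured frame at a different offset, and a leaked copy is attributed to whichever receiver's assigned frame slots it reproduces exactly, the filled-in frames elsewhere showing error against the stored baseline.

Code:
# Toy model of leak attribution among N receivers (illustration only).
import random
random.seed(0)

capture = [random.uniform(0, 1) for _ in range(120)]   # frames at the high capture rate
receivers = ["alice", "bob", "carol", "dave"]
N = len(receivers)
sent = {name: capture[i::N] for i, name in enumerate(receivers)}   # every Nth frame each

def republish(name):
    """Leaker pads its copy back to full rate: its own frames at their true
    positions, previous-frame hold in the gaps (the padded frames are the tell)."""
    i, own = receivers.index(name), sent[name]
    full, last = [None] * len(capture), sent[name][0]
    for k, frame in enumerate(own):
        full[i + k * N] = frame
    for j in range(len(full)):
        if full[j] is None:
            full[j] = last
        else:
            last = full[j]
    return full

leaked = republish("carol")          # suppose carol published without authorization

def mismatch(name):
    """Distance between the leaked copy and the sender's stored baseline at this
    receiver's assigned frame slots; it is ~0 only for the actual leaker."""
    i = receivers.index(name)
    return sum(abs(leaked[j] - capture[j]) for j in range(i, len(capture), N))

print(min(receivers, key=mismatch))  # -> carol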

Thus I conclude that, for an unauthorized publication, it is possible to identify the responsible receiver among multiple receivers of a video broadcast.

Let's hope we can implement this soon.

Added in private email:

>My point was it isn't hard at all. Simply compress the image maximally and
>the compressed signal is the signature. It will be very difficult to alter
>the video in way that retains the meaning, without compressing to the same
>signature. And then form multiple videos by sampling at higher frame rates.

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Oxymoronic "secret questions"

Post  Shelby Sun Oct 17, 2010 12:58 pm

http://www.marketoracle.co.uk/Article22098.html

Shelby wrote:Does anyone else hate the "Secret Question" password recovery techniques? The name of my first dog, where I was born, etc... are not secrets! Especially after I answer the same question on several websites (you think they are all perfectly secure??). I always answer these (if I am forced to answer) with gibberish, e.g. "kjhbjkuytv78wsdnjksnkjjn891gb ckj". Then I call or email support if I need password recovery.
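If it helps anyone, here is a trivial way (Python 3 standard library) to generate such a gibberish answer instead of mashing the keyboard; keep the output in a password manager rather than in your memory:

Code:
# Generate an unguessable "secret question" answer.
import secrets
print(secrets.token_urlsafe(24))   # paste the output as the "answer"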

Oh and in rebuttal to Forest Lane's last comment:

https://goldwetrust.forumotion.com/knowledge-f9/book-ultimate-truth-chapter-6-math-proves-go-forth-multiply-t159-15.htm#3748

The universe is always trending to more disorder, i.e. more independent actors and more possibilities. It will always outrun the centralized orders. That is the definition of natural law.

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


A better Paypal? Amazon payments! Yeah.

Post  Shelby Sun Oct 17, 2010 4:24 pm

Finally, this looks like something to fix the PayPal balance-withholding fraud:

http://aws.amazon.com/fps/
http://aws.amazon.com/fps/pricing/
http://aws.amazon.com/devpay/

Compare to PayPal (but remember PayPal often confiscates balances or delays releasing them, plus the paperwork tsuris):

https://www.paypal.com/ph/cgi-bin/webscr?cmd=_display-receiving-fees-outside

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


The State Versus the Internet

Post  goldwave Mon Oct 18, 2010 3:30 am

From Paul Rosenberg, the CEO of Cryptohippie USA, the leading provider of Internet anonymity.

http://cryptohippie.com/

http://www.lewrockwell.com/orig11/rosenberg-p1.1.1.html


goldwave

Posts : 1
Join date : 2009-01-22


Rebuttal to "Internet Kill Switch"

Post  Shelby Mon Oct 18, 2010 11:52 am

Thanks for posting that!

goldwave wrote:From Paul Rosenberg the CEO of Cryptohippie USA, the leading provider of Internet anonymity.

http://cryptohippie.com/

http://www.lewrockwell.com/orig11/rosenberg-p1.1.1.html



  1. It is impractical for them to enforce that proposed wiretap-backdoor legislation; people will simply move to P2P and rogue anonymizing software. It will be effective against large, popular sites, though.
  2. An "Internet kill switch" is technically impractical, because TCP/IP is self-healing and will route around any networks that are taken down. They can kill major arteries, but the internet will go virally P2P in a very short time.
  3. Technically, SecureBGP (BGPSEC) can't be widely implemented because it won't scale well beyond the major arteries. Ad hoc routing with TCP/IP will route around it if it becomes a block (nature sees it as non-functional and routes around it, per Coase's Theorem). The fact that BGP is P2P now means that it will be impossible to go back to making it centralized.
  4. Regarding intellectual property policing, a decentralized DNS is feasible and will be incentivized by the govt's fascism.
  5. All of this is like the Napster experience -- the more the authorities attacked, the more P2P alternatives popped up and the more people participated in downloading music for free. The govt is powerless (as usual), but they will hold sway over the large sites and arteries.
  6. The "computer health certificate" is so impossible that I really doubt the competence of the author of the link you provided.
  7. Cloud computing can be P2P; I am working on it: https://goldwetrust.forumotion.com/knowledge-f9/book-ultimate-truth-chapter-6-math-proves-go-forth-multiply-t159-15.htm#3640


Yes we have a battle coming between the State and the individual.

I urge people to come up to speed on the centralization that is dying:

https://goldwetrust.forumotion.com/knowledge-f9/book-ultimate-truth-chapter-6-math-proves-go-forth-multiply-t159-15.htm#3748
https://goldwetrust.forumotion.com/economics-f4/changing-world-order-t32-105.htm#3788

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Re: Computers:

Post  Shelby Sat Dec 25, 2010 12:25 pm

I own the domain "Copute.com".

Cooperative computing, or software that cooperates like legos to build anything anyone wants to accomplish in software.

I visualize a fundamental, radical acceleration of the way software can progress by being pure functional and thus re-usable at a finer granularity without refactoring tsuris:

http://www.chrismartenson.com/blog/prediction-things-will-unravel-faster-than-you-think/45297?page=7#comment-91068
http://copute.com/dev/docs/Copute/ref/function.html#Purity
https://goldwetrust.forumotion.com/t159p15-book-ultimate-truth-chapter-6-math-proves-go-forth-multiply#3640

I realized that what Linus Torvalds (genius creator of Linux and a primary factor in the open source phenomenon) said about "address space" separation is fundamentally a call for pure functional programming of the entire system, including the end-user software:

http://www.realworldtech.com/forums/index.cfm?action=detail&id=66630&threadid=66595&roomid=2

Some background on my prior thoughts on that:

http://jasonhommelforum.com/forums/showthread.php?p=55539#post55539
http://jasonhommelforum.com/forums/showthread.php?p=55512#post55512

What has gelled in my mind now is that there are 2 related challenges:

1. Resource access control must be as finely grained as the re-usable units of software. Otherwise, the re-use is not bijective and the lossy interoperation will cascade (domino).

2. Re-use must be pure functional, as any state machine at the highest level will be re-usable only to the extent that the state machine is provable and can be factored into the re-use.

Now to turn this into working code with a market...

=======================

Let me explain a key conclusion of what Linus said:

http://www.realworldtech.com/forums/index.cfm?action=detail&id=66630&threadid=66595&roomid=2

Linus Torvalds wrote:Anybody who has ever done distributed programming should
know by now that when one node goes down, often the rest
comes down too. It's not always true (but neither is it
always true that a crash in a kernel driver would bring
the whole system down for a monolithic kernel), but it's
true enough if there is any kind of mutual dependencies,
and coherency issues.

And in an operating system, there are damn few things that
don't have coherency issues. If there weren't any coherency
issues, it wouldn't be in the kernel in the first place!

(In contrast, if you do distributed physics calculations,
and one node goes down, you can usually just re-assign
another node to do the same calculation over again from
the beginning. That is not true if you have a
really distributed system and you didn't even know where
the data was coming from or where it was going).

What he is saying is that the fact that most programming languages (other than esoteric Haskell, Erlang, etc.) create software that is not pure functional (referentially transparent) means the coordination of the interoperation of these programs is lossy (incoherent: you don't know what the referential state-machine dependencies are). This is why he says you can do no better than to put this operating-system coordination (e.g. for Windows, Mac OS X, Linux, etc.) into a giant spaghetti monolithic kernel. However, the real problem is the lack of the 2 items I enumerated above.

We need to change the way we write and design software, to make pure functional lego building blocks, with matching granularity of resource access control (permissions). This inherently addresses the security issues too, such as DDoS:

http://jasonhommelforum.com/forums/showthread.php?p=55539#post55539

And the ability to make websites secure on the client:

http://www.marketoracle.co.uk/Article22098.html

==================

More from Linus on coherency challenges:

http://www.realworldtech.com/forums/index.cfm?action=detail&id=66656&threadid=66595&roomid=2

Linus Torvalds wrote:>To be fair even in monolithic kernels it is not easy
>to share data given races on SMP/preemption etc.

Nobody claims that threaded programming is easy.

But it is a hell of a lot easier if you can use a lock
and access shared data than if you have to use some
insane distributed algorithm. It's usually many orders
of magnitude easier in monolithic kernels.

Synchronizing some data in a monolithic kernel may
involve using a lock, or special instructions that do
atomic read-modify-write accesses. Doing the same in
a microkernel tends to involve having to set up a whole
protocol for communication between the entities that
need to access that data, or complex replication schemes
(or they just end up mapping it into every process space,
just to avoid the problem. Problem solved, by just
admitting that separate address spaces was a mistake)

>Usually you need to design a locking protocol, care
>about livetime issues etc. It's all not simple there
>neither.

Nobody says kernels are easy. We're talking purely about
the relative costs. And microkernels are harder.
Much harder.

>I always admired how elegant some of parallel Erlang
>programs look and they use message passing
>so you certainly can do some stuff cleanly with
>messages too.

Not efficiently, and not anything complex.

It's easy and clean to use messages if you don't have
any truly shared data modified by both entities.

But the whole point of a kernel tends to be about shared
data and resources. Memory pressure? How do you free
memory when you don't know what people are using it for?

You can try to do an OS in Erlang. Be my guest. I'll be
waiting (and waiting.. The point being - you can't do
a good job).

Let's try an analogy. I'm not sure it's a great analogy,
but whatever:

In the UNIX world, we're very used to the notion of having
many small programs that do one thing, and do it well. And
then connecting those programs with pipes, and solving
often quite complicated problems with simple and independent
building blocks. And this is considered good programming.

That's the microkernel approach. It's undeniably a really
good approach, and it makes it easy to do some complex
things using a few basic building blocks. I'm not arguing
against it at all.

BUT IT IS NOT REALISTIC FOR ALL PROBLEMS. It's a really
really good way to solve certain problems. I use
pipelines all the time myself, and I'm a huge believer.
It works very well indeed, but it doesn't work very well
for everything.

So while UNIX people use pipelines for a lot of important
things, and will absolutely swear by the "many small
and independent tools", there are also situations where
pipelines will not be used.

You wouldn't do a database using a set of pipes, would you?
It's not very efficient, and it's no longer a simple flow
of information. You push structured data around, and you
very much will want to access the database directly (with a
very advanced caching mapping system) because not doing so
would be deadly.

Or, to take another example: you may well use a typesetting
system like LaTeX in a "piped" environment, where one tool
effectively feeds its input to another tool (usually through
a file, but hey, the concept is the same). But it's not
necessarily the model you want for a word processor, where
the communication is much more back-and-forth between the
user and the pieces.

So the "many small tools" model works wonderfully well,
BUT IT ONLY WORKS FOR A NICELY BEHAVED SUBSET OF YOUR
PROBLEM SPACE. When it works, it's absolutely the right
way to do things, since you can re-use components. But
when it doesn't work, it is just a very inconvenient
model, and while it's certainly always possible to
do anything in that model (set up bi-directional sockets
between many different parts), you'd have to be crazy to
do it.

And that's a microkernel. The model works very well for
some things. And then it totally breaks down for others.

Linus is correct that your I/O and data structures (i.e. your state machine) cannot be distributed. Monolithic kernels attempt to filter the aliasing error of the lossy coherency.

What I am saying is that we need a computer language (my proposed Copute) that enables us to delineate between the pure functional and the stateful portions of our software, so that we can re-use the former in other software and only have to focus our coherency-error challenges on the known stateful portions.

For example, pure functional code can be interrupted at any time without causing any coherency (race) issues. As I said, the stateful portions should be high-level (the outermost functions of our software).
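A minimal Python sketch of that delineation (the function names are illustrative, not Copute syntax): the pure part depends only on its inputs, so it can be re-run, cached, parallelized, or interrupted freely, while all the state (I/O) is pushed to a thin outermost layer.

Code:
# Pure core: referentially transparent, depends only on its arguments.
def word_counts(text):
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Stateful shell: all I/O lives at the outermost layer, so only this thin
# layer carries coherency/race concerns.
def main(path):
    with open(path) as f:              # state: file system access
        text = f.read()
    for word, n in sorted(word_counts(text).items()):
        print(word, n)                 # state: console output

if __name__ == "__main__":
    main("example.txt")                # hypothetical input file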

==================

Linus on security:

http://www.realworldtech.com/forums/index.cfm?action=detail&id=67181&threadid=66595&roomid=2

Linus Torvalds wrote:For the last year or so, the new magic world has been
"security". It appears as totally bogus as the "ease of
maintenance" tripe was (and for largely the same reasons:
security bugs are often about the interactions
between two subsystems, and the harder it is to write
and maintain, the harder it is to secure).

I'm sure that ten years from now, it will be something else.

There's always an excuse.

Linus is again correct. Security is inherent in the pure functional software, but the security holes are in the stateful portions (where we have the coherency, aka interoperation, challenge). That is why we need a language to model the separation of the two software paradigms in the same system.

Linus Torvalds wrote:The whole "make small independent modules" thing just sounds
like manna from heaven when you're faced with creating an
OS, and you realize how daunting a task that is. At
that point, you can either sit back and enjoy the ride
(which I did - partly because I didn't initially really
realize how daunting it would be), or you can seek mental
solace in an idea that makes it sound easier than it is.

This comes for free in the pure functional portions, as the language enforces that the software is coherency-agnostic. I am trying to find the post where Linus admits such a possibility for the future. Ah, here is one post where he concedes that the coherency issues could be attacked from the language layer:

http://www.realworldtech.com/forums/index.cfm?action=detail&id=67198&threadid=66595&roomid=2

And here he says it again more generally:

http://www.realworldtech.com/forums/index.cfm?action=detail&id=67213&threadid=66595&roomid=2

Linus Torvalds wrote:As I already alluded to (and expanded on in my reply to
myself), I actually think using "designer languages" may
be a much better solution to the modularity problem than
the microkernel approach. The C model is very much one
where you have to do all the modularity by hand (which
Linux does, btw - don't get me wrong).

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Economic model for Copute ($50 x 7 billion = $350 billion per year)

Post  Shelby Mon Dec 27, 2010 4:08 pm

http://esr.ibiblio.org/?p=2813&cpage=5#comment-290515

Shelby wrote:>>This really resolves to who makes money in the ecosystem, the app vendors (iOS) or the handset makers (Android).
>
> You came in late, so maybe you haven’t seen my analysis of Google’s grand strategy. You should probably read this and this for starters.

Check my logic please?

Some users are unwilling to pay $50+ for a new OS every time they buy a new computer, unless it is hidden in the cost of the new computer, because they are not getting any significant innovation. Witness the number of users preferring to run Windows XP SP2, especially in the developing world, as it is much easier to steal than Vista or Windows 7 -- just patch it and turn off Windows Update.

Closed-source code strangles itself because it fights against innovation of itself. The vested interest is in the past, not in the optimum future.

>> You don’t think in the future that people will have similar amounts of money (relative to the cost of the unit) tied up into their phones as they do for their computers?
>
> No, because I have yet to buy an app. So far, everything I’ve been able to identify that I wanted has been available for free.

Let me peer into a possible future.

Selling many little innovations separately doesn't scale, i.e. users can't be bothered with the hassle of micro-payments, e.g. imagine making a $0.0001 fractional-cent payment decision on every URL clicked; this implies Apple's AppStore is capitalizing on only a fraction of the potential innovation. And large open source projects don't scale revenue sharing out to multitudes of random contributions (thus are not maximizing potential innovation). Imagine instead $12 a year (total for all the apps they use) from each of 100s of millions of users of a computer or smart phone, with that revenue distributed proportionally to every contributor, given some unifying (open source!) paradigm for monetizing those myriad innovations. Imagine that increasing to $50+ per year, as competition between applications forced unification into that paradigm, thus increasing the value to the user. The userbase and the value (cost) per user would both be increasing. Imagine that paradigm won because of a (key fundamental computing theory) economic benefit of a "designer" programming language that rendered existing languages and operating systems (including Linux) uncompetitive, precisely because it enabled fine-grained contribution scaling.

And if you think all 7 billion won't have a computer soon, read this:

http://esr.ibiblio.org/?p=2835
http://tech.fortune.cnn.com/2010/12/22/2011-will-be-the-year-android-explodes/

=========================================
Economic Model For Sharing Revenue With Contributors
=========================================

Copute will profile a statistically accurate sample of the actual CPU time consumed by each code contribution. The gross revenue can be shared in proportion to CPU time. Additionally, this information will drive software developers (who build programs from these code bases) to select the code that has the best performance (least CPU time usage), which will drive competition among contributors. Since these will be pure functional code contributions (per Copute's unique paradigm-shift breakthrough), a contributor can improve upon an existing code contribution (and congruency is verifiable due to the pure functional referential transparency), and if for example they reduce the CPU time by say 90%, then they receive 90% of the revenue generated by that contribution, and the prior code contributor will receive 10%. The competitor will be able to market their contribution to software developers by attaching theirs to the former contribution, and software developers can choose (we might even be able to automate unit testing to verify congruency of outputs between two contributions).
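A back-of-the-envelope Python sketch of that accounting (all figures and names are invented): gross revenue is split in proportion to profiled CPU time per function, and when a new contributor ships a congruent replacement that cuts a function's CPU time by 90%, that contributor takes 90% of the function's share while the prior author keeps 10%.

Code:
# Illustrative revenue split (all figures invented).
gross_revenue = 1_000_000.0                       # $ per period

# Profiled CPU seconds per function after Bob ships a 10x-faster "decode"
# originally contributed by Alice.
cpu_seconds = {"parse": 40.0, "render": 50.0, "decode": 10.0}
total = sum(cpu_seconds.values())

# Revenue attributed to each function, in proportion to its CPU time.
per_function = {f: gross_revenue * t / total for f, t in cpu_seconds.items()}

# Bob cut decode's CPU time by 90%, so he gets 90% of decode's share and
# Alice, the prior contributor, keeps 10%.
payout = {
    "parse author": per_function["parse"],
    "render author": per_function["render"],
    "bob": 0.9 * per_function["decode"],
    "alice": 0.1 * per_function["decode"],
}
for who, amount in payout.items():
    print(f"{who}: ${amount:,.2f}")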

The bottom line is the user will not see any of this, they will just see an exponential increase in the rate of software innovation.

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Face recognition software and Facebook

Post  Shelby Tue Dec 28, 2010 6:36 pm

Put your photo online, and the computer can identify you in any video or other photo. Facebook is able to automate the labeling of names in photos now.

And here is what the police will do with that technology:

http://www.marketoracle.co.uk/Article25179.html

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Copute Milestone: fixed 6 known ambiguities in other major languages

Post  Shelby Mon Jan 03, 2011 5:02 pm

Non-context-free grammars cause human errors in programming. This is fundamental, and it has been causing me (and other programmers) to repeat silly bugs for 25 years (Murphy's Law applies even when one is experienced):

http://copute.com/dev/docs/Copute/ref/llk.html#Context_Free_Grammar

Copute removes these known Context-Free ambiguities.

1. Dangling else.
Note Python is not entirely Context Free.

2. Ambiguity between return and following expression.

3. Ambiguity between prefix and postfix unary ++ and --

4. Ambiguity between infix and unary - and +

The LL(k) grammar compiler tells me about all such ambiguities, but it is up to me to design the above fixes.

Copute also removes these known terminal semantic ambiguities, which are caused by giving terminals (e.g. '(' and '+') different semantic meaning in different contexts where two of the contexts can occur simultaneously:

5. Ambiguity between grouping and function call.

6. Ambiguity between number add and string concatenation operator.

If you know of any more terminal semantic ambiguities in other major languages, please let me know as soon as possible before I finalize the Copute grammar.
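To illustrate ambiguity 6 in a language most readers know: Python keeps number addition and string concatenation on the same '+' terminal, so which operation you get is decided by runtime types rather than by the grammar (and mixing the types is only caught at run time), which is exactly the kind of context-dependent meaning that distinct operators are intended to remove.

Code:
# '+' in Python means two different things depending on runtime types.
print(1 + 2)         # 3     (numeric addition)
print("1" + "2")     # "12"  (string concatenation)
try:
    print("1" + 2)   # the grammar accepts this; only the runtime rejects it
except TypeError as err:
    print("TypeError:", err)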

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com


Python's conflation of indenting and grammar

Post  Shelby Tue Jan 04, 2011 3:42 pm

Now on to Python, a computer language created in the early 1990s that is now gaining momentum and popularity...

Shelby at jasonhommelforum.com wrote:...People rave about Python's conflation of indenting and grammar. Geez that is a step backwards to 30 years ago. Copute does it mathematically correct (all conflation removed from the grammar -> semantic translation layer) as per the prior post...

Haskell also conflates layout and grammar.

Here is how Python solved the nested if-else ambiguity, by conflating indenting and grammar, by making indenting a syntactical unit:

Code:
if foo:
    if bar:
        print 'Inner True'
    else:
        print 'Inner False'
else:
    print 'Outer False'

One problem with that is it doesn't make the following unambiguous, therefore the following is illegal in Python:

Code:
if test1: if test2: print x

Copute can allow the above as follows. And even though Python could not allow it when the "else:" is present, it could allow it when the "else:" is not present, but it does neither. Note that since Python uses indenting to declare a statement group (aka "block"), i.e. it makes newline+indent the block start and newline+outdent the block end (whereas Copute uses braces to delimit blocks), it cannot use Copute's context-free solution:

Code:
if test if test2 print x
if test {if test2 print x} else print y

Copute is not conflated with indenting, so the following works and has equivalent meaning in Copute also:

Code:
if test
  if test2 print x
if test {
  if test2 print x
} else
  print y

Thus the programmer's choice of visual layout is not constrained by grammar (layout is not conflated with grammar), which enables a code renderer to re-layout (aka reflow) code automatically, which for example would be necessary to make very wide lines wrap on a small smart-phone screen. Note I envision a coming smart-phone that will have a second foldout screen juxtaposed against the main screen, making your iPhone screen twice as wide in the narrowest direction. So the width of text displayed will differ depending on whether you are displaying on your wide desktop screen, your narrow iPhone, your iPhone widened with the foldout, or your iPad.

However, I think Python can be automatically reflowed too, because indenting is required to create a new block; thus lines that are too long can be wrapped to the next line at the same indent level without changing the meaning of the code. And single-line if-else constructs can be automatically wrapped to new lines with indenting. Also, these automatic mappings can be done in the inverse, to accommodate wider screens.

Thus, I am leaning towards adopting Python's conflation of layout and grammar, since it appears to be invertible and bijective, and it adds a slight readability advantage and slightly lower verbosity (see below for examples). But if I do, I would still require braces for the single-line if-else case (see why above), and allow the option of braces for the single-line 'if' and 'while' (there is no 'for' in Copute) cases (see why below). And note that adopting this might change how I have decided to resolve the 6 ambiguities in other languages that I wrote about in the prior post.

Tangentially, notice that Python requires colons after statement headers and semicolons between statements on the same line, which Copute does not:

Code:
if x < y < z: print x; print y; print z

The advantage for Copute is you don't have to remember to insert those. In Python those semicolons have a higher grouping precedence than the colon, thus the above is equivalent to the following in Copute:

Code:
if x < y < z {print x print y print z}

I don't see any advantage in verbosity for Python, and it is a heck of a lot more clear in Copute what the grouping is.

And Python's method is extremely subject to human error, because just one of those tiny semicolons missing (accidentally -- hey, I can barely see those tiny things) screws up the entire line:

Code:
if x < y < z: print x; print y print z

This is equivalent to the following in Copute.

Code:
if x < y < z {print x print y} print z

The following ambiguity exists in both Python and Copute:

Code:
if x < y < z: print x print y print z # Python
Code:
if x < y < z print x print y print z // Copute

In both cases, it is equivalent to the following in Copute:

Code:
if x < y < z {print x} print y print z

Here is the justification from Guido, the creator of Python:

Any individual creation has its ideosyncracies,

ideosyncracies means idiosyncrasies

and occasionally its creator has to justify these. Perhaps Python's most controversial feature is its use of indentation for statement grouping, which derives directly from ABC. It is one of the language's features that is dearest to my heart. It makes Python code more readable in two ways. First, the use of indentation reduces visual clutter and makes programs shorter, thus reducing the attention span needed to take in a basic unit of code.

As you can see in the examples above, Python is not significantly shorter than Copute.

Second, it allows the programmer less freedom in formatting, thereby enabling a more uniform style, which makes it easier to read someone else's code. (Compare, for instance, the three or four different conventions for the placement of braces in C, each with strong proponents.)

As I wrote above, I am thinking that Python's use of the newline and indent as syntactical units, can be automatically reflowed, just as using braces can.

Whereas, if the code could not be reflowed for layout into different screen widths and lengths, then that loss of freedom propagates into domino tsuris. For example, say my iPhone screen is too narrow to accommodate the length of the lines in some Python code; then the lines cannot be wrapped to fit inside my narrow screen, because this would change the intended meaning (semantics) of the code. I would be forced to scroll both horizontally and vertically, which is extremely difficult for a human (try it: read a paper book through a piece of cardboard with a hole cut in the shape of your iPhone screen).

And the claimed verbosity and lack-of-homogeneity disadvantage of Copute's bracing (for blocks) is very minor, if not insignificant pedantry.

Other than the dangling if-else case, bracing is not needed in Copute when there isn't more than one line in the block, e.g.

Code:
while x < y
  print x
print end

And thus only 2 extra characters (the { and }) when there are 2+ lines in a block, e.g.

Code:
while x < y {
  print y
  print x
}
print end

The various optional layouts for bracing in Copute include the following examples:

Code:
while x < y {
  print y
  print x
}
while x < y
{
  print y
  print x
}
while x < y
  {
  print y
  print x
  }
if x < y {
  print y
  print x
} else print

Whereas in Python always:

Code:
while x < y:
  print y
  print x
if x < y:
  print y
  print x
else print

But in theory, Copute code can always be automatically reflowed to the preferred style of the reader, so thus compare the above Python to this in Copute:

Code:
while x < y {
  print y
  print x
}
if x < y {
  print y
  print x
} else print

Or if you prefer always:

Code:
while x < y {
  print y
  print x }
if x < y {
  print y
  print x
} else print

That slight readability advantage for Python becomes more and more insignificant as the number of lines in a block increases; and, if it comes at the cost of layouts which cannot be reflowed, it is a loss of freedom both for the programmer who prefers a different layout style and for re-rendering to differing window sizes.

This emphasis on readability is no accident. As an object-oriented language, Python aims to encourage the creation of reusable code.

What a joke, but the sad part is that Guido did not (at least at that time) even realize that object-oriented programming with dynamic typing can never be referentially transparent (aka context-free) and thus is not reusable:

http://esr.ibiblio.org/?p=2491#comment-276772 ("Jocelyn" is me, read all my comments on page)
http://copute.com/dev/docs/Copute/ref/class.html#Virtual_Method
http://copute.com/dev/docs/Copute/ref/function.html#Purity

Python is always a dynamically typed language, so it is hopeless for wide-scale (meaning positive scaling law, listen at 4:20 and 15:00) reusability (aka compositions, mashups).

Even if we all wrote perfect documentation all of the time, code can hardly be considered reusable if it's not readable.

Guido did not understand what drives reusability. It is all about eliminating external context dependencies (aka futures contracts).

The whole point is that functions should be context-free (i.e. only depend on their inputs), and made as small as possible so they can be reused by other functions.

So then the whole point of (e.g. Copute's) reusability is that we don't want to force others to read the code inside a function just to know what it is doing. The statically typed (even if type-inferred) function interfaces (for functions that do not access external context, nor internally store context with closures, generators, or a 'static' keyword) are self-documenting (even inferred types can be displayed by the compiler), and thus don't force everyone to load all the code (inside the functions) of the entire world into their head. Domino gridlock is to be avoided, not promoted.

Guido's misunderstanding is promulgated by many in the open source movement, who latched on to the rallying cry "more eyes mean shallower bugs". See my first link above, where I (alias "Jocelyn") caught Eric Raymond (the self-proclaimed 160-IQ spokesman of the open source movement who coined the above phrase) in a logic error on this. Btw, I was forced to use "Jocelyn" because Eric Raymond banned "Shelby" (and lately he banned "Jocelyn"). I will get the last word in the marketplace!

More eyes on the referentially transparent function interfaces is what we need, not huge monolithic open source code bases that can't be easily forked or understood by any contributors other than the core insiders.

I will change the world. That is why I gave my Christmas to this Copute. (There was a beautiful lady that I let down this holidays!)

Many of Python's features, in addition to its use of indentation, conspire to make Python code highly readable. This reflects the philosophy of ABC, which was intended to teach programming in its purest form, and therefore placed a high value on clarity.

Again what an ironic joke, that Guido didn't realize that Python can never be pure nor clear, because it can't be referentially transparent.

In this missive, he is extrapolating that making the bark clear allows one to see over the forest.

Readability is often enhanced by reducing unnecessary variability. When possible, there's a single, obvious way to code a particular construct. This reduces the number of choices facing the programmer who is writing the code, and increases the chance that it will appear familiar to a second programmer reading it.

Again he failed to understand that context-freedom is necessary to prevent aliasing. For example, if conflating layout (indenting) and grammar is not invertible and bijective, it may cause the aliasing error of not being able to view code on narrow screens (although afaics so far, Python's conflation of layout and grammar is invertible and bijective).

Based on recent essays from Guido, I tend to think he may still not understand.

Look, I don't like it when people call me out, and I don't like to call others out, except when I try to help people and they ban me. I originally did not want to create Copute. I was making suggestions on how to improve HaXe (which is the closest so far to what I want), and the creator of HaXe, Nicolas Cannasse, banned me.

I am not trying to embarrass Guido, but I want to make it clear that I have real reasons for being forced to create my own language. I wanted somebody else to do it, but after 25 years, I got tired of waiting and pleading and being banned.

Pros and Cons of Python's Indenting

I will discuss these in order, starting with the least important and least controversial.

1. NEUTRAL: Python contains no do-while, only 'while'[ cited ]. This appears to be because the Python developers did not want to follow the 'do' (or the indented block that follows the 'do') with a 'while' that ends with no colon (a colon requires a following expression or block), for ideological consistency in Python's look-and-feel[ cited ]. They also considered a form with 'while:' (including the colon) that would have an optional following expression or block, but nowhere else in Python is a colon followed by "nothing"[ cited ] (i.e. "nothing" would I guess be a new line at the same indent or outdented). Afaics, it would be possible to design Copute's grammar such that indentation changes within expressions are consumed but ignored, and thus I don't see why the outdent to the 'while' couldn't be consumed within a compound do-while statement. The issue for Python was not context-free-grammar related, rather ideological consistency. I think it would also be possible to have Copute implement that proposed optional block following 'while'[ cited ], which is known as the "loop and a half" pattern[ cited ], but in Copute it would introduce the same ambiguity as a dangling expression on 'return' (so a semicolon is always required). In any case, do-while (even the complete "loop and a half" form) is not much less verbose than an equivalent construct using 'while-true' and 'if-break'[ cited ] (see the sketch after this list), so it is difficult to make a strong case for augmenting the traditional C-like 'do-while'.

2. NEUTRAL: Deeply nested (i.e. indented) code blocks cannot be outdented, thus forcing horizontal scrolling[ cited ]. But such deeply nested code forces horizontal scrolling even where outdenting is allowed. Besides, the deeply nested cases are usually more elegantly handled (and potentially more reusable too) with sub-functions and potentially recursion or standard list operations such as 'map', 'fold', etc.

3. CON: Python code could be rendered unusable when posted on a forum or web page that removes whitespace[ cited ]. HTML collapses juxtaposed whitespace into a single space by default. Although this can be classified as user error (i.e. not enclosing within <pre></pre> tags), languages that do not conflate indenting and grammar wouldn't be subject to this user error (Murphy's Law).

4. PRO: Impossible to obfuscate the meaning of a program by using bogus indentation[ cited ]. Note this must be combined with a line continuation token (e.g. '\' in Python), otherwise this would be simultaneously a PRO and a CON[ cited ]. Obfuscation means that the indenting can imply the program has a different meaning than what the program's grammar sees. I had given several examples of that within my prior post about correcting 6 ambiguities in other languages. Note that some people see obfuscation as a feature, i.e. to distribute JavaScript code that is so unreadable (i.e. the entire program squashed into one line, all unnecessary spaces removed, etc.) that it is not practical to steal the code. However, other forms of obfuscation, e.g. replacing identifiers with randomized words, are not prevented. And whitespace obfuscation is invertible and bijective to any preferred rendering of the grammar, so in theory someone could write an automated code restoration tool; thus the value of whitespace obfuscation is dubious.

5. CON: Tabs and spaces cannot be safely mixed[ cited ], because the interpretation of the width of the tab character is not carried with all possible file types that can contain tabs. Thus if tabs and spaces are simultaneously allowed for indenting in the same file (not allowed in Python 3), then indenting alignments (and thus the meaning of the program) can be lost as code is copied around or opened in different tools. Languages which don't conflate indenting and grammar have a similar but less serious problem (because only the reader's meaning changes, not the actual execution meaning) with loss of indenting alignment when mixing tabs and spaces, which is why they don't get the #4 PRO above. Since Python 3 prevents mixing tabs and spaces for indenting, and since other languages also suffer from such mixing (thus they should also prevent it), I would like to rate this as a NEUTRAL, but there is the problem that other tools can silently introduce tabs, and the user is faced with tsuris without even knowing why or how to fix it[ cited ]. Also I want to make a related but slightly orthogonal point, that any use of tabs for indenting is bad, because indenting alignments can change when tab widths do. Thus I would advocate that tabs are never allowed for indenting, and this is even more critical for a language (e.g. Python) which conflates indentation and grammar and thus relies on indentation alignment for execution semantics.

6. CON: Having accidental superfluous space can silently change the meaning of code which follows[ cited ], e.g.

Code:
if x:
.....print x

....print x
....if y:
........print y
(I was forced to use periods instead of spaces, because this forum won't render the single space difference otherwise)

Can you see that the first "print x" is indented one space more than the rest of the lines that follow? But will you notice it when it is buried in a page of 1000 lines of dense code? (Strictly, current CPython rejects this example, and the next one, with an IndentationError, because the dedent matches no enclosing level; the silent hazard arises when the stray indentation happens to line up with a valid outer block.) So the meaning the grammar extracts from that code, rather than the one the eye sees, is effectively as follows:

Code:
if x:
    print x

print x
if y:
    print y

Consider this example:

Code:
def myfunction(foo, bar):
....foo.boing()
...for i in bar.fizzle(foo):
......baz = i**2
....foo.wibble(baz)
....return foo, baz

It actually means:

Code:
def myfunction(foo, bar):
  foo.boing()
for i in bar.fizzle(foo):
  baz = i**2
foo.wibble(baz)
return foo, baz
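
Returning to item 1 above, here is a minimal Python sketch of the "loop and a half" (while-true with if-break) that Python offers in place of do-while; the input list is just a stand-in so the sketch runs, and the point is only that the bottom-tested form is barely more verbose than a hypothetical do-while.

Code:
lines = iter(["first", "second", ""])   # stand-in input source, invented for illustration

# A hypothetical do-while would read:
#   do:
#       line = next(lines)
#       print("processing", repr(line))
#   while line != ""
# Actual Python, the "loop and a half":
while True:
    line = next(lines)                  # body always executes at least once
    print("processing", repr(line))
    if line == "":                      # bottom-of-loop test, as in do-while
        break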

Conclusion and Decision

Even though I had intuitive theoretical misgivings about conflating whitespace and grammar before I wrote this post, I mildly appreciated Python's goal of consistent indenting of blocks without the pollution of braces. So I opened my mind to consider the pros and cons.

Upon evaluating the pros and cons, the human error (Murphy's Law) cons introduced are overwhelming, and even the one pro case requires polluting the page with line continuation tokens ('\' in Python). However, what really sealed my decision not to emulate Python's conflation is that I realized just now that if Copute's grammar is well-defined with braces, then the user's editor can hide the braces when the indenting agrees, and/or the editor can enforce the indenting, not show the braces, and save the file with the appropriate braces. Thus all the cons of Python are avoided, the one pro is also achieved, and all the stated benefits of Guido's goal are achieved entirely.
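
To show that the round trip is mechanical, here is a minimal sketch in Python (not an actual Copute tool, and the function name is invented) that re-derives braces from indentation for a toy language in which every deeper-indented line opens a block; a real editor would of course work against Copute's actual grammar.

Code:
def braces_from_indent(lines, indent_width=2):
    out, stack = [], [0]              # stack of currently open indentation levels
    for line in lines:
        text = line.lstrip(' ')
        level = (len(line) - len(text)) // indent_width
        while level < stack[-1]:      # close every block we have outdented past
            stack.pop()
            out.append(' ' * (indent_width * stack[-1]) + '}')
        if level > stack[-1]:         # a deeper indent opens a new block
            out[-1] += ' {'
            stack.append(level)
        out.append(line)
    while stack[-1] > 0:              # close any blocks still open at end of file
        stack.pop()
        out.append(' ' * (indent_width * stack[-1]) + '}')
    return out

print('\n'.join(braces_from_indent([
    'while x < y',
    '  print y',
    '  print x',
    'print done',
])))
# while x < y {
#   print y
#   print x
# }
# print done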

So looks like I improved upon Guido's vision. :wink:

I seem to be serially good at that sort of "math visualization" success; perhaps that is why all the IQ tests say I am. I mean, I would just like a little bit of mutual respect from my peers (as in, not banning people who disagree with you, because they might be correct and you can't see it), not bragging (well, maybe I am a little, but Copute is still vaporware).

=======
ADD: looks like Guido had a similar realization 13 years ago, but didn't act on it:


Last edited by Shelby on Sun Jul 24, 2011 4:25 am; edited 1 time in total

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down

Computers: - Page 4 Empty Benefits of referential transparency virally spreading into rest of design of Copute

Post  Shelby Thu Jan 06, 2011 11:12 am

1. Parametrized Class Inheritance

Here is a problem that Java is struggling with; even the experts can't seem to solve it succinctly, and Wikipedia can't even describe it coherently. But lookie here at Copute:

http://copute.com/dev/docs/Copute/ref/class.html#Parametrized_Inheritance

Covariant assignment, e.g. T<Super> = T<Sub>, is allowed when the type T is read-only [...] Contravariant assignment, e.g. T<Sub> = T<Super>, is also allowed when T is write-only.

So that means that for a pure function in Copute, there will be absolutely no tsuris with parametrized classes that employ inheritance one level deep. And no covariant inheritance tsuris with parametrized classes ever (which is the main type of inheritance that programmers intuitively expect).
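
Copute cannot be run yet, but the same variance rule can be sketched with Python's typing module (a rough analogue only; ReadOnlyBox, Sink, Animal and Cat are invented names): a read-only container is safely covariant and a write-only consumer is safely contravariant, exactly as quoted above.

Code:
from typing import Generic, TypeVar

T_co = TypeVar('T_co', covariant=True)               # only ever read out
T_contra = TypeVar('T_contra', contravariant=True)   # only ever written in

class ReadOnlyBox(Generic[T_co]):
    def __init__(self, item: T_co) -> None:
        self._item = item
    def get(self) -> T_co:
        return self._item

class Sink(Generic[T_contra]):
    def put(self, item: T_contra) -> None:
        pass

class Animal: ...
class Cat(Animal): ...

# Covariant: a ReadOnlyBox[Cat] may be used where a ReadOnlyBox[Animal] is
# expected, because callers can only read Cats out of it, and every Cat is
# an Animal.  (T<Super> = T<Sub> when T is read-only.)
animals: ReadOnlyBox[Animal] = ReadOnlyBox(Cat())

# Contravariant: a Sink[Animal] may be used where a Sink[Cat] is expected,
# because whatever accepts any Animal certainly accepts a Cat.
# (T<Sub> = T<Super> when T is write-only.)
animal_sink: Sink[Animal] = Sink()
cat_sink: Sink[Cat] = animal_sink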


2. Inferred Typing and Parametrized Functions

http://code.google.com/p/copute/issues/detail?id=27

Function call expressions, which do not explicitly list the inferred types in their type parameter list, implicitly create polymorphic (i.e. potentially more than one) instantiations of a referentially transparent (aka pure) function declaration that does not contain 'typeof T', where T is an inferred type.

Whereas, function call expressions, which do not explicitly list the inferred types in their type parameter list, may only create one instantiation of a function declaration that either is referentially opaque (aka impure) or contains 'typeof T', where T is an inferred type. That implicit instantiation has the inferred type(s) of the first function call expression encountered by the compiler.

The point is that since referentially transparent (aka pure) functions have no side effects (not even internally retained state), there is no possible impact on the caller's state machine whether there is one or more than one implicit instantiation of the function being called. Whereas, for example, given a function that saves internal state (e.g. a counter), implicitly calling multiple instantiations would impact the caller's state machine differently than calling one instantiation. The ambiguity arises because it is implicit; the programmer may not even be aware that the code is calling more than one instantiation.
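
A small Python illustration of that hazard (Python has no implicit per-call-site instantiation, so this only mimics the situation described above): the impure counter gives the caller different results depending on whether one instantiation is shared or two exist, while the pure function is indifferent either way.

Code:
def make_counter():
    count = 0
    def counter():
        nonlocal count
        count += 1          # internally retained state
        return count
    return counter

shared = make_counter()
print(shared(), shared())            # 1 2  -- one instantiation, state accumulates

a, b = make_counter(), make_counter()
print(a(), b())                      # 1 1  -- two instantiations, state does not

def square(x):
    return x * x                     # pure: one copy or many, same answer

print(square(3), square(3))          # 9 9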

What this means is that we never have to declare argument and return types for pure functions, and we can still get complete static typing benefits without tsuris. We only need to declare whatever constraints (if any) those pure functions require on their argument and return types.

Wow, I am seeing Copute gain more and more of the power of Haskell while still looking almost exactly like JavaScript (the most popular language in the world, because it is the script language in every web browser).


Last edited by Shelby on Sat Feb 19, 2011 1:32 pm; edited 1 time in total

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down

Computers: - Page 4 Empty Milestone: ah the holy grail of typed array size

Post  Shelby Mon Jan 10, 2011 2:15 am


Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down

Computers: - Page 4 Empty Covariant substitution is always due to immutability

Post  Shelby Sat Jan 15, 2011 5:19 am

> Shelby wrote:
>> Seems that some (many?) don't realize that what makes Haskell covariant
>> for parametrized subtyping is the referential transparency, or am I
>> mistaken?
>>
>> http://copute.com/dev/docs/Copute/ref/class.html#Parametrized_Inheritance
>> http://lambda-the-ultimate.org/node/735#comment-63943
>
> Haskell does not have covariance/contravariance because Haskell does not
> have sub/super typing.

I think I was correct (but maybe not?). Let me explain my big picture analysis.

I had written circa late 2009, "Base classes can be ELIMINATED with Interfaces":

http://www.haskell.org/pipermail/haskell-cafe/2009-November/068432.html

Haskell has implied "sub/super-typing" when the functions of a type are contained within the functions of another type, but types are only groups of functions (aka named interfaces) and never bundle/bind mutable data. And this hierarchical grouping is not required, as a type may include any function, even those from any other types, without any requirement to include all the functions of another type.

Instead of bundling mutable data, where the functions of a type have overloads (or there exist conversion functions to those overloads) which input the data type (possibly a tuple), that data type is a member of the type. Thus a data type can have multiple types, and these are not restricted to being hierarchical.

It is critical to covariant substitution in Haskell that the data type be immutable, i.e. referentially transparent, meaning that the types (groups of functions) defined in Haskell cannot modify the input data type. Imagine an array of numbers (floats, ints, fixed point, etc.) and we want to define a type in Haskell (a function) that accepts an array of ints and returns an array of numbers (perhaps this function adds an element, e.g. push()). The fact that referential transparency in Haskell ensures the returned array can never be referred to elsewhere as an array of ints (after potentially a non-int has been added to it) is why that function is allowed; i.e. covariant substitution is allowed because the input is always read-only due to referential transparency. The referential transparency is forcing the output to be a copy of the input.
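
A rough Python rendering of that push() example (Python standing in for Haskell here; type checkers treat typing.Sequence as covariant precisely because it is read-only): the pure push only reads its input and returns a copy, so no alias of the original array of ints can ever be corrupted by the widening.

Code:
from typing import Sequence, Tuple, Union

Number = Union[int, float]

def push(xs: Sequence[int], x: Number) -> Tuple[Number, ...]:
    # Referentially transparent: the input is only read; the output is a copy.
    return tuple(xs) + (x,)

ints = [1, 2, 3]
widened = push(ints, 4.5)
print(widened)   # (1, 2, 3, 4.5) -- a new value that now contains a float
print(ints)      # [1, 2, 3]      -- the original alias of ints is untouched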

Note also that in Haskell, data (which are always immutable in Haskell) are just functions that always return the same value.

Thus Haskell achieves this covariant substitution via an all-or-nothing approach (caveat below) to referential transparency (aka purity).

The key to emulating Haskell's inheritance granularity is simply to declare very granular interfaces in Copute, e.g. optimally one added function per interface, and to not include non-function member variables in those interfaces:

http://copute.com/dev/docs/Copute/ref/class.html#Inheritance

For example, instead of requiring the type Object for a function that just wants to input all types that have toString(), make Object inherit from an IToString interface that contains toString().
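
That granularity can be sketched with Python's structural typing (typing.Protocol, Python 3.8+); this is only an analogue of the idea, not Copute's interface syntax, and the names simply mirror the example above rather than any existing API.

Code:
from typing import Protocol

class IToString(Protocol):
    def to_string(self) -> str: ...

class Money:
    def __init__(self, cents: int) -> None:
        self.cents = cents
    def to_string(self) -> str:
        return f"${self.cents / 100:.2f}"

def describe(x: IToString) -> str:
    # Depends only on the one-method interface, not on a whole Object base class.
    return "value: " + x.to_string()

print(describe(Money(499)))   # value: $4.99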

Thus we can see that Haskell's approach is not superior in fundamental power; rather, it forces a "purer" design which may not be optimal in all cases. Furthermore, Haskell forces purity everywhere (well, not entirely true, e.g. seq and the state monad), whereas Copute allows both, with (in theory) optimal granularity for the programmer to compose with. In an ideal world the programmer wants to strive for maximum purity (immutable data) and minimum referential opacity (minimize the state machine). But the real world has a state machine; the Dunbar number is evidence of that, as well as numerous fundamental theorems.

I think separation is the goal, and I observe Haskell is heavily tilted toward pure code, at the cost of intuitive integration with the impure. Perhaps that is just a matter of personal preference, and my lack of experience with Haskell. In any case, I think many programmers will share my preference, as evidenced by Haskell's slow adoption for commercial applications. And lazy evaluation has a big cost (one for which I submitted an idea for a solution).


Last edited by Shelby on Sat Feb 19, 2011 2:14 pm; edited 3 times in total

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down

Computers: - Page 4 Empty Programmers are getting miseducated about programming language grammar ambiguities

Post  Shelby Sat Jan 15, 2011 10:34 am

I just commented publicly ad nauseam on this:

http://en.wikipedia.org/w/index.php?title=Recursive_descent_parser&oldid=407998337#Shortcomings

In the general case, recursive descent parsers are not limited to context-free grammars and thus do no global search for ambiguities in the LL(k) parsing First_k and Follow_k sets. Thus ambiguities are not known until run-time, if and when the input triggers them. Such ambiguities, where the recursive descent parser defaults (perhaps unknown to the grammar designer) to one of the possible ambiguous paths, result in semantic confusion (aliasing) in the use of the language, and lead to bugs by users of ambiguous programming languages which are not reported at compile-time, and which are introduced not by human error, but by ambiguous grammar. The only solution which eliminates these bugs is to remove the ambiguities and use a context-free grammar.

http://en.wikipedia.org/w/index.php?title=Parser_combinator&oldid=407998210#Shortcomings_and_solutions

Parser combinators, like all recursive descent parsers, are not limited to context-free grammars and thus do no global search for ambiguities in the LL(k) parsing First_k and Follow_k sets. Thus ambiguities are not known until run-time, if and when the input triggers them. Such ambiguities, where the recursive descent parser defaults (perhaps unknown to the grammar designer) to one of the possible ambiguous paths, result in semantic confusion (aliasing) in the use of the language, and lead to bugs by users of ambiguous programming languages which are not reported at compile-time, and which are introduced not by human error, but by ambiguous grammar. The only solution which eliminates these bugs is to remove the ambiguities and use a context-free grammar.
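
A tiny Python sketch of the kind of silent default being described (a toy hand-written recursive descent parser, not any real library), using the classic dangling-else ambiguity: the parser greedily binds the 'else' to the nearest 'if' and never reports that another parse was grammatically possible, whereas an LL(1) table generator would flag the First/Follow conflict on the optional else-part at generation time.

Code:
# Grammar:  stmt := 'if' 'cond' stmt ['else' stmt] | 'x'
def parse_stmt(tokens, pos):
    if tokens[pos] == 'if':
        pos += 1                                   # consume 'if'
        assert tokens[pos] == 'cond'
        pos += 1                                   # consume 'cond'
        then_branch, pos = parse_stmt(tokens, pos)
        else_branch = None
        if pos < len(tokens) and tokens[pos] == 'else':
            pos += 1                               # greedy: silently picks one of two parses
            else_branch, pos = parse_stmt(tokens, pos)
        return ('if', then_branch, else_branch), pos
    elif tokens[pos] == 'x':
        return 'x', pos + 1
    raise SyntaxError(tokens[pos])

tree, _ = parse_stmt(['if', 'cond', 'if', 'cond', 'x', 'else', 'x'], 0)
print(tree)   # ('if', ('if', 'x', 'x'), None): the else bound to the inner if,
              # with no warning that the outer binding was also grammatical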

http://www.codecommit.com/blog/scala/the-magic-behind-parser-combinators/comment-page-1#comment-5252

One major problem with recursive descent algorithms, such as parser combinators, is that they do not do an LL(k) global search for First_k and Follow_k set ambiguities at parser-generation time. You won't actually know you have an ambiguity unless and until you encounter it in the input at runtime. This is quite critical for developing a language:

http://members.cox.net/slkpg/documentation.html#SLK_FAQ

In the development of Copute's grammar, I found critical ambiguities that would not have been evident if I had gone with parser combinators (aka recursive descent algorithms). The tsuris I encountered in resolving ambiguities was due to incorrect grammar.

Also, they will never be as fast, because for the k-lookahead conflicts they follow unnecessary paths, since the global optimization (lookahead tables) was not done.

I don't see what the benefit is. Perhaps it is just that the LL(k) parser generation tools are not written in good functional programming style in modern languages, thus making them difficult to adapt to and bootstrap in your new language.

Also, even though the time spent on semantic analysis of the AST during compilation is often much greater than the time spent in the parser stage, the speed of the parser stage is still very important for JIT compilation/interpretation.

Also I had looked at the recursive descent algorithms option and rejected it objectively.

The speedup in development time from not finding ambiguities will not make the ambiguities disappear, because the grammar would still be ambiguous (backtracking makes a grammar ambiguous except in rare cases), and ambiguity results in semantic inconsistencies in the language, which get borne out as needless programming bugs that waste the hours of every programmer using the language. I have numerous examples of resolved issues at my Google Code tracker for Copute.

So that speedup in development effort incurs a cost that is going to be paid (probably more excruciatingly) down the line.

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down

