GoldWeTrust.com

Computers:


Monad/Comonad duality, also lazy/eager duality

Post  Shelby Wed Jul 27, 2011 3:17 pm

http://blog.sigfpe.com/2006/06/monads-kleisli-arrows-comonads-and.html?showComment=1311779802514#c3359419049328160234

Shelby Moore III wrote:
Monad is the model for any parametric type that we know the generative structure of, so we can compose functions on lifted outputs, because the type knows how to lift (i.e. construct, generate, 'unit' or 'return') instances of its type parameter to its structure.

Comonad is the model for any parametric type whose structure we don't know how to generate, but whose instances of the type parameter we can observe in its structure as they occur. We will only know its final structure when it is destructed and observation ceases. We can't lift instances of its type parameter to its structure, so we can't compose functions on outputs. Instead, we can compose functions with lifted inputs (and optionally outputs, i.e. map on observations), because the type has observations.

Conceptually, the monad vs. comonad duality is related to the duality of induction vs. coinduction, and initial vs. final (least vs. greatest) fixpoint, because we can generate structure for a type that has an initiality, but we can only observe structure until we reach a finality.

Induction and Co-induction
Initiality and Finality
Wikipedia Coinduction

I had visited this blog page before (and not completely grasped it), then I read this page again trying to conceptualize the sums vs. products duality for eager vs. lazy evaluation.

Perhaps I am in error, but it appears that with lazy evaluation and corecursion, a monad can be used instead of a comonad, e.g. isn't it true that a stream can be abstracted by a monadic list in Haskell?

So dually, am I correct to interpret that laziness isn't necessary for modeling the compositionality of coinductive types, when there is a comonad in the pure (referentially transparent) part where the composition is?

Edit#1: The word "compositionality" can refer to the principle that the meaning of the terms of the denotational semantics, e.g. the Comonad model, should depend only on the meaning of the fragments of syntax it employs, i.e. the subterms. What I understand from a research paper[1] is that the compositional degrees-of-freedom of the higher-level language created by the higher-level denotational semantics depend on the "free variables" in the compositionality fragments. Thus the compositionality can be affected by the evaluation order and other operational semantics. Due to the Halting Problem, where the lower-level semantics is Turing complete, the subterms will never be 100% context-free. I have proposed that when the higher-level semantics unifies lower-level concepts, compositionality is advanced. Please see the section Skeptical? -> Higher-Level -> Degrees-of-Freedom -> Formal Models -> Denotational Semantics at http://copute.com for more explanation.

[1] Declarative Continuations and Categorical Duality, Filinski, section 1.4.1, The problem of direct semantics

Edit#2: The composition of functions which do not input a comonad, with those impure ones that do, can be performed with the State monad. The comonad method, Comonad[T] -> T, is impure (see explanation in the section Skeptical? -> Higher-Level -> Degrees-of-Freedom -> Formal Models -> Denotational Semantics -> Category Theory at http://copute.com), so we must thread it across functions which might be pure, using the State monad. Thus the answer to my last question is "correct": we can purely compose any pure functions which input a Comonad, because the comonad method, (Comonad[T] -> A) -> Comonad[T] -> Comonad[A], is pure if the input function, Comonad[T] -> A, is. Also the answer to my other question is "incorrect", because a monad can abstract a comonad only for the history of observations (see explanation at the same section), since a monad has no interface for creating a new observation.

Shelby Moore III wrote:
Followup to the two questions in my prior comment.

Monad can't abstract a comonad, because it has no method, m a -> a, for creating a new observation. A monad can abstract the history of prior observations. Afaics, for a language with multiple inheritance, a subtype of comonad could also be a subtype of monad, thus providing a monadic interface to the history of observations. This is possible because the comonad observation factory method, m a -> a, is impure (the state of the comonad blackbox changes when history is created from it).

Composition of functions, m a -> b, which input a comonad is pure (i.e. no side-effects, referentially transparent, declarative not imperative) where those functions are pure (e.g. they do not invoke m a -> a to create a new observation). In short, the method (m a -> b) -> m a -> m b is pure if m a -> b is.
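To make the signatures concrete, here is a minimal Haskell sketch (the class is defined locally for illustration; it mirrors, but is not, any particular library's API) of the two methods above, with a stream as the usual coinductive example:

Code:
-- The two comonad methods in the m a -> a and (m a -> b) -> m a -> m b
-- notation used above (names assumed for illustration).
class Functor w => Comonad w where
  extract :: w a -> a                    -- the observation method, m a -> a
  extend  :: (w a -> b) -> w a -> w b    -- composition on lifted inputs

-- An infinite stream: we can always observe the current element, but we
-- never hold the whole structure at once.
data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)

instance Comonad Stream where
  extract (Cons x _)     = x
  extend f s@(Cons _ xs) = Cons (f s) (extend f xs)

For this pure Stream value, even extract is pure; the impurity discussed above arises when the observations come from an external blackbox rather than from an already-built value.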


Last edited by Shelby on Sat Jul 30, 2011 1:01 pm; edited 7 times in total


Call-by-need memoizes arguments, not functions

Post  Shelby Sun Jul 31, 2011 9:07 am

http://augustss.blogspot.com/2011/04/ugly-memoization-heres-problem-that-i.html#3700840423100518476

Shelby Moore III wrote:
@francisco: Haskell's call-by-need (lazy evaluation) memoizes function arguments, but not functions.

=====verbose explanation======

The arguments to a function are thunked, meaning each argument gets evaluated only once, and only when it is needed inside the function. This is not the same as checking whether a function was previously called with the same arguments.

If the argument is a function call, the thunk will evaluate it once, without checking whether that function had been called elsewhere with the same arguments.

Thunks are conceptually similar to parameterless anonymous functions with a closure on the argument, a boolean, and a variable to store the result of the argument evaluation. Thus thunks incur no lookup costs, because they are parameterless. The cost of the thunk is the check on the boolean.

Thunks give the same amount of memoization as call-by-value (which doesn't use thunks). Neither call-by-need nor call-by-value memoize function calls. Rather both do not evaluate the same argument more than once. Call-by-need delays that evaluation with a thunk until the argument is first used within the function.
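Here is a minimal sketch in Haskell of the thunk described above, written out explicitly with an IORef (a hypothetical helper for illustration, not GHC's actual thunk machinery):

Code:
import Data.IORef

-- A parameterless closure over the unevaluated argument, a "forced yet?"
-- flag, and a slot for the cached result.
newThunk :: IO a -> IO (IO a)
newThunk compute = do
  cell <- newIORef Nothing               -- Nothing = not yet evaluated
  return $ do
    cached <- readIORef cell
    case cached of
      Just v  -> return v                -- evaluated before: reuse the result
      Nothing -> do
        v <- compute                     -- evaluate the argument exactly once
        writeIORef cell (Just v)
        return v

Forcing the returned action twice runs the computation only once, but nothing here checks whether some other call site evaluated the same expression -- which is exactly the distinction between memoizing arguments and memoizing functions.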

Apologies for being so verbose, but Google didn't find an explanation that made this clear, so I figured I would cover all the angles in this comment.


Eager vs. Lazy evaluation

Post  Shelby Sun Jul 31, 2011 10:22 pm

See also:

https://goldwetrust.forumotion.com/t112p165-computers#4430

========================================

http://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.html#4642367335333855323

Shelby Moore III wrote:
Appreciated this article. Some points:

1. Lazy also has latency indeterminism (relative to the imperative world, e.g. IO monad).

2. A compiler strategy that dynamically subdivides map for parallel execution on multiple cores requires that map not be lazy.

3. any = or . map is not "wrong" for eager (strict) evaluation when there is referential transparency. It is slower in sequential time, but maybe faster in parallel execution.

4. Wikipedia says that for Big O notation, O(n) is faster than O(n log n) = O(log n!). @Lennart, I think you meant to say that O(n) is faster than O(n log n).

5. Given that laziness causes space and latency indeterminism, if the main reason to use lazy is to avoid the performance hit for conjunctive functional composition over functors, then only functions which output applicable functors need apply laziness. As @martin (Odersky) suggested, provide lazy and eager versions of these functions. Thus eager by default with optional lazy annotations would be preferred.

On principle and the 80/20 rule: 80+% of the programmers in the world are not likely to grasp debugging lazy space leaks. It will only take one really difficult one to waste a work-week, and that will be the end of it. And what is the big payoff, especially with the parallelism freight train bearing down? Someone claimed that perhaps the biggest advance in mainstream languages since C was GC (perhaps Java's main improvement over C++). Thus, if the typical programmer couldn't do ref counting without creating cycles, I don't think they will ever grasp lazy space and latency indeterminism. I am approaching this from wanting a mainstream language which can replace Java for statically typed programming.

Am I correct to say?

RT eager code evaluated as RT lazy could exhibit previously unseen space and latency issues.

RT lazy code evaluated as RT eager could exhibit previously unseen non-termination, e.g. infinite recursion and exceptions.

@augustss Rash judgments w/o experience are annoying. Two decades of programming and copious reading are all I can humbly offer at this moment. This is imperfect and warrants my caution. I appreciate factual correction.

My point is that with eager, debugging the changes in the program's state machine at any function step will be bounded to the function hierarchy inside the body of the function, so the programmer can correlate changes in the state machine to what the function is expected to do.

Whereas, with lazy, any function may backtrack into functions that were in the argument hierarchy of the current function, and into functions called an indeterminate time prior. Afaics, lazy debugging should be roughly analogous to debugging random event callbacks, and reverse engineering the state machine in a blackbox event generation module.

As I understand from Filinski's thesis, eager and lazy are categorical duals in terms of the inductive and coinductive values in the program. Eager doesn't have products (e.g. conjunctive logic, "and") and lazy doesn't have coproducts (e.g. disjunctive, "or"). So this means that lazy imposes imperative control logic incorrectness from the outside-in, because coinductive types are built from observations and their structure (coalgebra) is not known until the finality when the program ends. Whereas, eager's incorrectness is from the inside-out, because inductive types have an a priori known structure (algebra) built from an initiality. Afaics, this explains why debugging eager has a known constructive structure, while debugging lazy is analogous to guessing the structure of a blackbox event callback generation module.

My main goal is to factually categorize the tradeoffs. I am open to finding the advantages of lazy. I wrote a lot more about this at my site. I might be wrong, and that is why I am here to read, share, and learn. Thanks very much.

Could you tell me why lazy (with optional eager) is better for you than referentially transparent (RT) eager (with optional lazy), other than the speed of conjunctive (products) functional composition? Your lazy binding and lazy functions points should be doable with terse lazy syntax at the let or function site only, in a well designed eager language. Infinite lazy types can be done with the optional lazy in an eager language with such optional lazy syntax. Those seem to be superficial issues with solutions, unlike the debugging indeterminism, which is fundamental and unavoidable.

I wish this post could be shorter and still convey all my points.

Idea: perhaps deforestation could someday automate the decision on which eager code paths should be evaluated lazily.

This would perhaps make our debate moot, and also perhaps provide the degrees-of-freedom to optimize the parallel and sequential execution time trade-off at runtime. I suppose this has been looked at before. I don't know whether it is possible, because I haven't studied the research on automated deforestation deeply enough.

I understand the "cheap deforestation" algorithm in Haskell only works on lazy code paths, and only those with a certain "foldr/build" structure.

Perhaps an algorithm could flatten the function calls in eager, referentially transparent (i.e. no side-effects) code paths to their bodies, but in lazy order, until a cyclical structure is identified, then "close the knot" on that cyclical structure. Perhaps there is some theorem that such a structure is bounded (i.e. "safe") with respect to the correctness of transforming coproducts to products, i.e. space and latency determinism.

Your any = or . map example flattens to a cycle (loop) on each element of the functor (e.g. list), which converts each element to a boolean, exits if the result is true, and always discards the converted element in every possible code path of the inputs. That discard proves (assuming RT and thus no side-effects in map) the lazy evaluation of the eager code has no new coproducts, and thus the determinism in space and latency is not transformed.
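For reference, the example being analyzed can be written out in Haskell as follows (anyOf is my name, chosen to avoid clashing with the Prelude's any):

Code:
-- any as the composition of or and map.  Under lazy evaluation, or stops
-- at the first True, so map is never demanded past that element; under
-- strict evaluation the whole list is mapped first -- slower sequentially,
-- but (given referential transparency) still correct, as argued above.
anyOf :: (a -> Bool) -> [a] -> Bool
anyOf p = or . map p

-- e.g. anyOf even [1, 3, 4, undefined] evaluates to True lazily, because
-- the undefined element is discarded before it is ever forced.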

There are discarded products (map did not complete), thus in a non-total language some non-termination effects may be lost in the transformation. Anyway, I think all exceptions should be converted to types, thus the only non-termination effect remaining is infinite recursion.

There is the added complication that in some languages (those which enforce the separation-of-concerns, interface and implementation), the map in your example may be essentially a virtual method, i.e. selected at runtime for different functors, so the deforestation might need to be a runtime optimization.

@augustss Are you conflating 'strict' with 'not referentially transparent'? I think that is a common misperception, because there aren't many strict languages which are also RT. So the experience programmers have with strict languages is non-compositional due to the lack of RT. Afaik, the composition degrees-of-freedom are the same for both strict and lazy, given both are RT. The only trade-off is with runtime performance, and that trade-off has pluses and minuses on both sides of the dual. Please correct me if I am wrong on that.

@augustss thanks, I now understand your concern. The difference in non-termination evaluation order with runtime exceptions is irrelevant to me, because I proposed that by using types we could eliminate all exceptions (or at least eliminate catching them in the declarative RT portion of the program). I provide an example of how to eliminate divide-by-zero at copute.com (e.g. a NonZero type that can only be instantiated from a case of Signed, thus forcing the check at compile-time before calling the function that has division). A runtime exception means the program is in a random state, i.e. that the denotational semantics is (i.e. the types are) not fit to the actual semantics of the runtime. Exceptions are the antithesis of compositional, regardless of the evaluation order.
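A minimal Haskell sketch of that divide-by-zero example (the names are illustrative, not copute.com's actual design; in a real module the NonZero constructor would be hidden so a NonZero could not be forged):

Code:
-- A NonZero can only be obtained by first classifying an Int's sign, so
-- the zero case must be handled before safeDiv can ever be called, and no
-- runtime exception is needed.
newtype NonZero = NonZero Int

data Signed = Negative NonZero | Zero | Positive NonZero

classify :: Int -> Signed
classify n
  | n < 0     = Negative (NonZero n)
  | n > 0     = Positive (NonZero n)
  | otherwise = Zero

safeDiv :: Int -> NonZero -> Int
safeDiv m (NonZero n) = m `div` n       -- n is provably non-zero here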

In my opinion, a greater problem for extension with Haskell (and Scala, ML, etc.) is that, afaics, they allow conflation of interface and implementation. That is explained at my site. I have been rushing this (80+ hour weeks), so I may have errors.

Another problem for extension is Haskell doesn't have diamond multiple inheritance. Perhaps you've already mentioned that.

Modular Type Classes, Dreyer, Harper, Chakravarty.

@augustss agreed, we must ensure termination for cases which are not bugs, but infinite recursion is a bug in strict, thus I excluded it when comparing compositionality. No need to go 100% total in strict to get the same compositionality as lazy. Perhaps Coq can insure against the infinite recursion bug, but that is an orthogonal issue.

The diamond problem impacts compositionality, because it disables SPOT (single-point-of-truth). For example, Int can't be an instance of Ord and OrdDivisible, if OrdDivisible inherits from Ord. Thus functions that would normally compose on Ord have to compose separately on Int and IntDivisible, and thus do not compose.

@augustss if 'map pred list' doesn't terminate, it is because list is infinite. But infinite lists break parallelism and require lazy evaluation, so I don't want them as part of a default evaluation strategy. Offering a lazy keyword (i.e. a type annotation or lazy type) can enable expression of those infinite constructions, but in my mind it should be discouraged, except where clarity of expression is a priority over parallelism. Did I still miss any predominant use case?

Thus, I concur with what Existential Type replied. For example, pattern match guards for a function are runtime function overloading (splitting the function into a function for each guard), thus the compiler shouldn't evaluate the functions (guard cases) that are not called.

It appears to me (see my idea in a prior comment) that lazy is an ad-hoc, runtime, non-deterministic approximation of deforestation. We need better automatic deforestation-aware algorithms for eager compilers, so that the parallelism vs. sequential execution strategy is relegated to the backend and not embedded in the language.


Last edited by Shelby on Thu Aug 25, 2011 12:49 pm; edited 27 times in total


Computer Assisted Learning is capital's attempt to enslave mankind

Post  Shelby Mon Aug 01, 2011 12:28 pm

I didn't write this, but I agree with it. That is not to say that computer-assisted learning can't be useful, but rather that it must be assisted by real human teachers. This is related to the post I made some time ago refuting the idea that computers could replace humans.

http://www.soc.napier.ac.uk/~cs66/course-notes/sml/cal.htm

CAL Rant

The user should have control at all times, you are not forced to go through the material in any particular order and you are expected to skip the dull bits and miss those exercises which are too easy for you. You decide. The author does not believe that CAL is a good way to learn. CAL is a cheap way to learn, the best way to learn is from an interactive, multi functional, intelligent, user friendly human being. The author does not understand how it is that we can no longer afford such luxuries as human teachers in a world that is teeming with under-employed talent. His main objection to CAL is that it brings us closer to "production line" learning. The production line is an invented concept, it was invented by capital in order to better exploit labour. The production line attempts to reduce each task in the manufacturing process to something so easy and mindless that anybody can do it, preferably anything. That way the value of the labour is reduced, the worker need not be trained and the capitalist can treat the worker as a replaceable component in a larger machine. It also ensures that the workers job is dull and joyless, the worker cannot be good at his or her job because the job has been designed to be so boring that it is not possible to do it badly or well, it can merely be done quickly or slowly. Production line thinking has given us much, but nothing worth the cost. We have cheap washing machines which are programmed to self destruct after five years; cars, clothes, shoes - all of our mass produced items have built in limited life spans - this is not an incidental property of the production line, it is an inevitable consequence.
The introduction of CAL is the attempt by capital to control the educators. By allowing robots to teach we devalue the teacher and make him or her into a replaceable component of the education machine. I do not see how such a dehumanizing experience can be regarded as "efficient", the real lesson learned by students is that students are not worth speaking to, that it is a waste of resources to have a person with them. The student learns that the way to succeed is to sit quietly in front of a VDU and get on with it. The interaction is a complete sham - you may go down different paths, but only those paths that I have already thought of, you can only ask those questions which I have decided to answer. You may not challenge me while "interacting". I want students to contradict, to question, to object, to challenge, to revolt, to tear down the old and replace with the new.

Do not sit quietly and work through this material like a battery student. Work with other people, talk to them, help each other out.


OOP in Standard ML (SML)

Post  Shelby Sat Aug 06, 2011 3:53 am

My prior email needs followup clarification.

From section 2.1 of your paper[1], I assume a functor's 'sig' or 'struct' can recurse, e.g.

Code:
functor f( a : A ) sig
  type b
  val map : (f(a) -> b) -> f(a) -> f(b)
end

I have not yet learned how 'map' can be made polymorphic on b, independently of the creation of a concrete instance of f.

With limited study, I infer that 'structure' is a concrete data type where every abstract 'type' and method must be defined and implemented; that 'signature' is an abstract type to the extent that it has inner abstract 'type's and unimplemented method signature(s); and that 'functor' is a higher-kinded constructor (for an abstract 'sig' or concrete 'struct'), i.e. type parametrization.

Thus roughly the following informal correspondence to Scala:

signature                = trait
functor ,,, (...) sig    = trait ,,, [...]
structure                = mixin
functor ,,, (...) struct = mixin ,,, [...]
using ... in ,,,         = class ,,, extends ...

So it appears Scala is more generalized than Haskell, and perhaps equivalently so to SML (this would need more study). Scala also has type bounds in the contravariant direction and variance annotations (i.e. these would be the functor parameters in SML), as well as a bottom type Nothing.

Fyi, Iry's popular chart needs correction. Haskell and ML are higher-kinded.

http://james-iry.blogspot.com/2010/05/types-la-chart.html

> ml has higher kinds only through the module system. the class you mention
> is the type of a functor in ml. so, yes, we have this capability, but in
> a different form. the rule of thumb is anything you can do with type
> classes in haskell is but an awkward special case of what you can do with
> modules in ml. my paper "modular type classes" makes this claim
> absolutely precise.

It appears my suggested separation of abstract interface and concrete implementation would require that a sig must not reference any structure nor functor struct, and that functions must only reference a signature or functor sig.

For Copute, one of the planned modifications of Scala is that traits and functions can only reference traits.

Of course it is given that everything can reference the function constructor (->).

One example of the benefit of such abstraction: if we reference the red, green, blue values of a color type, we are calling interface methods, not matching constructor values. Thus subtypes of color may implement red, green, blue orthogonally to their constructor values.

I am fairly sleepy, I hope this was coherent.

[1] Modular Type Classes, Dreyer, Harper, Chakravarty, section 2.1 Classes are signatures, instances are modules


Copute for dummies

Post  Shelby Sun Aug 07, 2011 10:46 pm

My secret weapon against the fascism that is descending on our globe, humorously explained.

It flies exponentially faster than a speeding bullet; they can't hear it, they can't see it, they can't even understand it.

If you have read any of the copute.com site before, you can read it again, because I edited and improved nearly every section, even as recently as today.

I also suggest you watch this talk by the creator of Scala on the big change to parallelism that is occurring because computer clock speed can't increase any more, only the # of cores.

Folks, here is the result of several weeks of research and writing. I now have a layman's introduction to Copute and the explanation of why and how it could change the world radically:

http://copute.com/

You may read the following sections, which are non-technical. Don't read the sub-sections unless they are explicitly listed below (because you won't understand them).

Copute (opening section)
| Higher-Level Paradigms Chart
Achieving Reed's Law
What Would I Gain?
| Improved Open Source Economic Model.
| | Open Source Lowers Cost, Increases Value
| | Project-level of Code Reuse Limits Sharing, Cost Savings, and Value
| | Module-level of Reuse Will Transition the Reputation to an Exchange Economy
| | Copyright Codifies the Economic Model
Skeptical?
| Purity
| | Benefits
| | Real World Programs
| Higher-Level
| | Low-Level Automation
| | Degrees-of-Freedom
| | | Physics of Work
| | | | Fitness
| | | | Efficiency of Work
| | | | Knowledge
| State-of-the-Art Languages
| | Haskell 98
| | Scala
| | ML Languages

If you want some independent affirmation of my ideas, see the comments near the end of this expert programmer's blog page:

http://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.html#4642367335333855323

Feedback is welcome.

P.S. If you want to see what I mean about eliminating exceptions in Copute, load http://www.simplyscala.com/, then enter the following line of code and click the "Evaluate" button and watch the red errors fly all over (then pat yourself on the back, you wrote your first program):

List().tail

=======================
=========ADD===========
=======================


http://Copute.com

If you are using FF or Chrome, make sure you use the horizontal scroll bar to view the content to the right.

I would like to have the columns scroll vertically, with page breaks for the height of the screen, but CSS3 multicol does not provide that option. I may experiment later with trying to force paged media on a screen browser, in order to get vertical instead of horizontal scrolling.

The document is about how to help people cooperate faster on the internet, using a new computer language, that will lead to much more freedom.

TIP: Do you see that western governments are trying to overthrow the governments of the countries that have oil in the Middle East and northern Africa, and they are giving money to the radical Muslim Brotherhood, because they want to cause fighting and disorganization so the oil will be shut off for some years. The reason they want to do this, is they want to make the prices of everything go very high to bankrupt all the people in the world, so that they can make us their slaves. Of course, they say this is for democracy and the freedom of the people in those countries. How stupid people are to believe them.

I suggest you read the section "Physics of Work" on my site. Then you will understand why a centralized government is always evil and going towards failure. It is due to the laws of physics. Maybe you can understand if you read that section very carefully.

http://Copute.com

Skeptical?
| Higher-Level
| | Degrees-of-Freedom
| | | Physics of Work
| | | | Fitness
| | | | Efficiency of Work
| | | | Knowledge


Social cloud is how we will filter information more efficiently

Post  Shelby Mon Aug 22, 2011 6:06 am

You all know about filtering of information, because you come to this forum to read a lot of the information from the web condensed and filtered for you by people you think have a common interest and outlook (or at least related outlook worth reading).

Read this article for a layman's explanation:

http://esr.ibiblio.org/?p=3614&cpage=1#comment-318814

Shelby wrote:
You've identified my model for the simultaneous optimization of CTR (ratio of clicks to views of ads) and CR (conversion ratio of visitors to sales).

This should, once sufficiently ubiquitous, make it impossible for any entity (e.g. a blog owner) to censor or centralize the control over comments, because the comments (CR refinement) will continue into the social cloud.

Does this present a conflict of interest for Google, because if the social cloud is truly open, then how does it not eventually reduce the leverage to charge rents on advertising matching? Or does it increase the volume of ad spending (higher ROI for advertisers) and reward the business model with economy-of-scale for the cloud servers? Why wouldn't something like Amazon's commodity server model win over Google's smart servers model?

As information becomes more tailored to smaller groups, then P2P storage becomes less viable (unless we want to pay a huge bandwidth premium to store data on peers that don't use it), because there isn't enough redundancy of storage in your peer group, so it appears server farms are not going away. The bifurcating tree network topology is thermodynamically more efficient than the fully connected mesh (e.g. the brain is not a fully connected mesh, each residential water pipe doesn't connect individually to the main pumping station, etc).

P.S. I have become less concerned about the vulnerability (especially to fascist govt) of centralized storage, because as the refinement of data becomes more free market (individualized), it becomes more complex when viewed as a whole, thus the attackers will find it more challenging to impact parts without impacting themselves. Thus it appears to me the commodity server model wins, i.e. less intelligence in the server farm, and more in the virtual cloud.

Here were my prior posts on this matter:

Tension between CR and CTR for advertising!
Simultaneous maximization of both CR and CTR is Google's Achilles heel

=============================

http://esr.ibiblio.org/?p=3614&cpage=2#comment-319354

Isn't it very simple? Most of the comments have agreed, and I concur. Nature made whole foods slower to digest (e.g. fiber, not finely ground to infinite surface area, etc) so we don't spike our insulin, plaque our liver, etc.. Whole foods satisfy appetite because they contain the complex micro-nutrients our body craves (e.g. amino acids, etc), which processed foods do not. Complex carbs, sugars, fats should not be limited, because our body needs and will regulate these (even the ratios between food groups) signaled by our cravings. To change body form, increase exercise, don't limit diet. Processed carbs, sugars, and fats should be entirely avoided, as these screw up the feedback mechanism and probably cause the confusion referred to. The appetite feedback loop is out-of-whack when not consuming whole foods, and probably also when not exercising sufficiently. Except for outlier adverse genetics, no one eating only whole foods and exercising, needs to count calories and grams.

@Tom if the government wasn't taxing us for the healthcare of those who smoke, and of those who breathe their smoke in public venues, then we would be impacted less by their smoking. Probably there would be fewer smokers if government wasn't subsidizing their health care and lower job performance, and then the nuisance for us non-smokers would also be mitigated.

@gottlob Isn't social media a revolution in information targeting, i.e. fitness, which is orthogonal to the "quality of demand" to which you refer?

Tangentially, I also think that knowledge will soon become fungible money (and I don't mean anything like BitCoin, of which I am highly critical), in the form of compositional programming modules, which will change the open source model from esr's gift to an exchange economy. Remember from esr's recent blog, my comment was that software engineering is unique in that it is used by all the others, and it is never static, and thus is a reasonable proxy for (fundamental of) broad based knowledge. I suggest a broader theory, that the industrial age is dying, which is why we see the potential for billions unemployed. But the software age is coming up fast to the rescue. Open source is a key step, but I don't think the gift economy contains enough relative market value information to scale it to billions of jobs.

@Ken doesn't genetics matter at the extremes of desired outcome or adverse genetics, where whole foods and reasonable level of physical activity is sufficient for most, e.g. some fat is desirable and necessary normally?

===================================

http://esr.ibiblio.org/?p=3634&cpage=1#comment-319373

Perhaps "exeunt" because by implication, it is the company exited the stage.

I am saddened to read that Jobs was back in hospital on June 29, and that he won't be around to contribute and observe the ultimate outcome of the software age and open source. Contrast that with my wanting to eliminate his captive market for collecting 30% rents on software development and dictating the morality of apps. The competitive OPEN (not captive) balance is for investment capital to extract about 10% of the increase in capital.

Of course Apple will lose market share in the not too distant future, as will every capital order that exists due to more than 10% capture of the increase resulting from its investment. Btw, this is why a gold standard can never work, because society can't progress when new innovation and labor (i.e. capital) is enslaved to pre-existing capital (prior innovation and labor).

In my mind, the more interesting question is what happens to Google's ad renting captive market (I suspect taking way more than 10% of the increase in many cases), when the revolution of information targeting fitness of OPEN (i.e. not captive) social media has the same explosion into entropy that Android did to Apple. The waterfall exponential shift won't take more than 2 years once it begins. I suppose Google will be forced to lower their take (as Android will force Apple), thus exponentially increasing the size of the ad market is critical. So the motivation for Android is clear, but ironically it may accelerate Google's transformation from an ad company to a software company. But as a software company, I expect Google will be much more valuable as a collection of individuals or smaller groups, thus there will be an exodus of talent. I don't yet know how many years forward I am looking, probably not even a decade.


Last edited by Shelby on Thu Aug 25, 2011 10:00 am; edited 2 times in total


How to make knowledge fungible, i.e. a currency

Post  Shelby Tue Aug 23, 2011 4:41 pm

UPDATE: a new post on Entropic Efficiency summarizes this perhaps better than this post does.

Due to technical advances in code reuse, Copute seems to have the potential to become the next mainstream programming language, regardless of whether I can successfully design a new economic model for open source, i.e. exchange (capitalism) instead of gift (socialism).

Yet I am very curious about how knowledge could become a fungible currency.

One of the key insights was that software development is the only engineering discipline that is used by all the others. Thus software development is a fundamental expression of knowledge. So if there was some way to make software development fungible, then by proxy, knowledge would be also.

The idea had been that if there was some way people could create reusable software modules, then these could be exchanged as building blocks of larger software modules and applications. In theory, Copute is designed to make the modules reusable, so the remaining two challenges for a capitalistic exchange were:

1. how to define a fungible unit of the knowledge currency
2. how to convert this unit to general economy

The unit of a knowledge currency would be, due to the proposed proxy for knowledge, a unit of programming code. But the challenge was how to define such a unit that reflects a free market value and yet remains fungible. First of all, there is no known standard measure of code value, e.g. lines of code (LOC) appears not to be well correlated with code complexity nor market value. Note that every supply/demand curve has price on one of its axes, and quantity on the other axis. Thus if there were competing programming modules and their price was equivalent, then the relative quantity of demand would determine which had more market value. Note that price is useful when it contains information about relative market value, but if the software development process needs to incorporate many modules, and modules which use other modules, price contains no relative information of market value, because the developer can't possibly factor in all of these economic relationships, and modules wouldn't be able to reliably reuse other modules if the reused module prices could change. So the first key conclusion is that the unit of the programming code price must be standardized upon some metric which is reasonably correlated to effort, and then let relative market value be measured by the relative quantity of demand for competing modules.

When designing the Copute grammar, I realized that everything is a function with one input parameter. Thus an unambiguous measure of complexity is the number of such single-parameter functions in a module. And thus the single-parameter function is the basic unit of programming code, and thus by proxy the unit of knowledge. Thus the relative complexity (and thus relative price) of code can be automatically determined at compile-time.
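A toy sketch in Haskell of such a compile-time count (hypothetical, not Copute's actual compiler): treat everything as single-parameter functions and count the abstractions in the syntax tree.

Code:
-- Count single-parameter abstractions as the standardized unit of code
-- "price" proposed above.
data Expr
  = Var String            -- a reference to a name
  | Lam String Expr       -- a single-parameter function
  | App Expr Expr         -- function application

units :: Expr -> Int
units (Var _)   = 0
units (Lam _ e) = 1 + units e
units (App f x) = units f + units x

-- e.g.  \x -> \y -> x y  counts as 2 units
example :: Int
example = units (Lam "x" (Lam "y" (App (Var "x") (Var "y"))))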

But how to convert this unit to a price in the general economy, such that an incorrect automated relative price of modules would not skew the relative demand, e.g. code that could fetch a much higher price and still gain the same quantity of relative demand? The solution is of course competition. If a module owner can get a better (quantity x price) in another marketplace, they will. Additionally, the proposed marketplace will be non-exclusive, because the conversion of this knowledge unit to a price in the general economy will not be based on the relative choice of modules. In other words, the consumer of these units will not pay per unit, but per a metric of the total value added.

Notice that most programming languages and libraries are given away for free. This is the gift economy, i.e. I scratch your back if you scratch mine and let's not keep very specific accounting of it. Thus the improvement should have the same level of efficiency, while capturing more market value information. So the efficiency is basically that it makes no sense to pay out a large percentage of a software development's cost (or potential profit) to reuse existing work, because otherwise new innovation and labor becomes a slave to capital (old innovation and labor). Thus the marketplace of modules offered for reuse, should only extract for example the greater of 1% of the gross sales, or 10% of the development cost, for the software project reusing any quantity of the modules offered. This 10% comes from the Bible's wisdom to take only 10% of the increase. The 1% of gross assumes that a company should pay at least 10% of gross sales in (programming, i.e. knowledge) development costs, and our model asks for 10% of the 10%. There should be some threshold, so the vast majority of individual developers never pay.

So the offer to the software developer is, you can reuse as many modules as you want from our marketplace, and you pay us the greater of 1% of your gross sales, or 10% of your development cost, of the software product developed. This income is then distributed to the module owners, using the relative market value of (single-parameter function units x quantity of module reuse).

There is one remaining problem-- bug fixes and module improvements by 3rd parties. How do we motivate 3rd party bug fixes and module improvements, without penalizing the module owner? The module owner wants bug fixes and improvements, but doesn't want to pay more for them than they are worth. It seems the best is to let the module owner set a bounty on specific items in the bug and feature request database, and that bounty can be a percentage of the module's income.

This seems very workable and sound. I am hopeful that this makes knowledge into a fungible currency, with radical implications for the world.

=================
Additional Explanation
=================

Read the entire thread that is and precedes the following linked post:

https://goldwetrust.forumotion.com/t44p75-what-is-money#4535

==============================
http://esr.ibiblio.org/?p=3634&cpage=4#comment-320342

@Jeff Read
That research correlated SLOC with fault-proneness complexity. It is possible (I expect likely) that given the same complexity of application, bloated code may have more faults than dense code.

A metric that correlates with application complexity (as a proxy of relative price in: price x market quantity = value) is a requirement in the exchange model for open source that I am proposing. I will probably investigate a metric of lambda term count and denotational semantics (i.e. userland type system) complexity.

The point is that a metric for fault-proneness complexity may not be correlated with application complexity, i.e. effort and difficulty of the problem solved.

================
UPDATE: It is very important that the minimum level of royalty-free reuse of code modules have at least the following 3 qualities:


  1. Legal stipulation that the royalty-free reuse of code modules (from copute.com's repository only) must apply at least minimally to the ability of an individual to support himself with a small company. So the limits should be, minimally, roughly the development cost where one full-time programmer is employed (for the royalty on the development cost option), or the size of the market necessary to support the livelihood of an individual and his family. And the limit should be the greater of the two.
  2. Legal stipulation in the software license for all the current AND FUTURE modules, that the limits in #1 above, may never be decreased (although they could be increased).
  3. Legal stipulation that in the event that during any duration for which a court in any jurisdiction removes the force of these terms, that the license of the software module grants the people (but not corporations) in that jurisdiction the royalty-free license to all the modules on copute.com's repository, without any limitation. Then the courts will have every software module owner up-in-arms against them.


The reason is that if, for example, copute.com gained a natural monopoly due to being the first-mover and gaining inertia from the greatest economy-of-scale of modules, then it would be possible for passive capital to gain control over copute.com and change the terms of the company to extract unreasonable (parasitic) rents from the entire community, and thus we would be right back to a fiat-like system, where the capitalists control the knowledge workers' capital. Since those paying market-priced royalties in Copute will be the larger corporations, if the govt decides to heavily tax it, they will tax their own passive capitalists. You see, the design of Copute is to tax the frictional transaction costs in the Theory of the Firm that give rise to the corporation. Thus the design of Copute's economic model is for its value to decrease in legal tender terms as time goes on, while its value in knowledge terms increases. This is a very unique economic design, one I don't think has existed in any business model I am aware of.

Having these assurances will encourage more contribution to copute.com's repository. I would encourage competing repositories, and for them to adopt a similar license strategy.


Last edited by Shelby on Sun Sep 11, 2011 4:38 pm; edited 7 times in total


Creator of the browser says software will now take over the world

Post  Shelby Tue Aug 23, 2011 8:00 pm

Read my prior post about software being in everything and thus a proxy for knowledge, then read this:

http://online.wsj.com/article/SB10001424053111903480904576512250915629460.html (make sure you watch the video!)

Here is an excerpt:

My own theory is that we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy.

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.

Why is this happening now?

Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.

Over two billion people now use the broadband Internet, up from perhaps 50 million a decade ago, when I was at Netscape, the company I co-founded. In the next 10 years, I expect at least five billion people worldwide to own smartphones, giving every individual with such a phone instant access to the full power of the Internet, every moment of every day.

On the back end, software programming tools and Internet-based services make it easy to launch new global software-powered start-ups in many industries—without the need to invest in new infrastructure and train new employees. In 2000, when my partner Ben Horowitz was CEO of the first cloud computing company, Loudcloud, the cost of a customer running a basic Internet application was approximately $150,000 a month. Running that same application today in Amazon's cloud costs about $1,500 a month.

With lower start-up costs and a vastly expanded market for online services, the result is a global economy that for the first time will be fully digitally wired—the dream of every cyber-visionary of the early 1990s, finally delivered, a full generation later.

Perhaps the single most dramatic example of this phenomenon of software eating a traditional business is the suicide of Borders and corresponding rise of Amazon. In 2001, Borders agreed to hand over its online business to Amazon under the theory that online book sales were non-strategic and unimportant.

Oops.

Today, the world's largest bookseller, Amazon, is a software company—its core capability is its amazing software engine for selling virtually everything online, no retail stores necessary. On top of that, while Borders was thrashing in the throes of impending bankruptcy, Amazon rearranged its web site to promote its Kindle digital books over physical books for the first time. Now even the books themselves are software.

Today's largest video service by number of subscribers is a software company: Netflix. How Netflix eviscerated Blockbuster is an old story, but now other traditional entertainment providers are facing the same threat. Comcast, Time Warner and others are responding by transforming themselves into software companies with efforts such as TV Everywhere, which liberates content from the physical cable and connects it to smartphones and tablets.

Today's dominant music companies are software companies, too: Apple's iTunes, Spotify and Pandora. Traditional record labels increasingly exist only to provide those software companies with content. Industry revenue from digital channels totaled $4.6 billion in 2010, growing to 29% of total revenue from 2% in 2004.

Today's fastest growing entertainment companies are videogame makers—again, software...


I'm becoming skeptical about the claim that pure functional is generally log n slower

Post  Shelby Thu Aug 25, 2011 5:54 pm

http://stackoverflow.com/questions/1255018/n-queens-in-haskell-without-list-traversal/7194832#7194832

Shelby Moore III wrote:
I am becoming skeptical about [the claim][1] that pure functional is generally O(log n) slower. See also Edward Kmett's answer, which makes that claim. That may apply to random mutable array access in the theoretical sense, but random mutable array access is probably not what most algorithms require, when they are properly studied for repeatable structure, i.e. not random. I think Edward Kmett refers to this when he writes, "exploit locality of updates".

I am thinking O(1) is theoretically possible in a pure functional version of the n-queens algorithm, by adding an undo method for the DiffArray, which requests a look back in differences to remove duplicates and avoid replaying them.

If I am correct in my understanding of the way the backtracking n-queens algorithm operates, then the slowdown caused by the DiffArray is because the unnecessary differences are being retained.

In the abstract, a "DiffArray" (not necessarily Haskell's) has (or could have) a set element method which returns a new copy of the array and stores a difference record with the original copy, including a pointer to the new changed copy. When the original copy needs to access an element, then this list of differences has to be replayed in reverse to undo the changes on a copy of the current copy. Note there is even the overhead that this single-linked list has to be walked to the end, before it can be replayed.
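To make that abstract description concrete, here is a toy model in Haskell of such a versioned array (an illustration of the idea only, not GHC's actual Data.Array.Diff internals; it assumes the index already exists and that set is applied to the current version):

Code:
import Data.IORef
import qualified Data.Map.Strict as M

-- Every version is a mutable cell holding either the current flat copy,
-- or a difference record pointing at a newer version.
data Node a = Flat (M.Map Int a)          -- the up-to-date copy
            | Diff Int a (Version a)      -- (index, this version's value) -> newer copy
type Version a = IORef (Node a)

fromList :: [(Int, a)] -> IO (Version a)
fromList = newIORef . Flat . M.fromList

-- Setting an element returns a new current version and demotes the old
-- handle to a difference record.
set :: Version a -> Int -> a -> IO (Version a)
set v i x = do
  Flat m <- readIORef v
  new    <- newIORef (Flat (M.insert i x m))
  writeIORef v (Diff i (m M.! i) new)
  return new

-- Reading an old version walks the chain of differences toward the
-- current copy.
get :: Version a -> Int -> IO a
get v i = do
  node <- readIORef v
  case node of
    Flat m -> return (m M.! i)
    Diff j x v'
      | i == j    -> return x
      | otherwise -> get v' i

Reading the current version hits the flat copy directly; reading an old version pays for the walk over the difference records, which is the overhead the proposed undo operation aims to avoid on backtracking.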

Imagine instead these were stored as a double-linked list, and there was an undo operation as follows.

From an abstract conceptual level, what the backtracking n-queens algorithm does is recursively operate on some arrays of booleans, moving the queen's position incrementally forward in those arrays on each recursive level. See [this animation][2].

Working this out in my head only, I visualize that the reason DiffArray is so slow is that when the queen is moved from one position to another, the boolean flag for the original position is set back to false and the new position is set to true, and these differences are recorded, yet they are unnecessary, because when replayed in reverse the array ends up with the same values it had before the replay began. Thus instead of using a set operation to set back to false, what is needed is an undo method call, optionally with an input parameter telling DiffArray what "undo to" value to search for in the aforementioned double-linked list of differences. If that "undo to" value is found in a difference record in the double-linked list, no conflicting intermediate changes on that same array element are found when walking back in the list search, and the current value equals the "undo from" value in that difference record, then the record can be removed and that old copy can be re-pointed to the next record in the double-linked list.

What this accomplishes is to remove the unnecessary copying of the entire array on backtracking. There is still some extra overhead as compared to the imperative version of the algorithm, for adding and undoing the add of difference records, but this can be nearer to constant time, i.e. O(1).

If I correctly understand the n-queens algorithm, the lookback for the undo operation is only one, so there is no walk. Thus it isn't even necessary to store the difference of the set element when moving the queen position, since it will be undone before the old copy is accessed. We just need a way to express this type safely, which is easy enough to do, but I will leave it as an exercise for the reader, as this post is too long already.


[1]: https://goldwetrust.forumotion.com/t112p180-computers#4437
[2]: http://en.wikipedia.org/w/index.php?title=Eight_queens_puzzle&oldid=444294337#An_animated_version_of_the_recursive_solution

The backtracking n-queens algorithm is a recursive function that takes 3 parameters: an array for each diagonal direction (\ and /), and a row count. It iterates over the columns on that row, moving the queen position on that row in the arrays, and recursing on each column position with cur_row + 1. So it seems to me the movement of the queen position in the arrays is undoable as I have described in my answer. It does seem too easy, doesn't it? So someone please tell me why, or I will find out when I write out an implementation in code.


Industrial Age is being replaced by the Software (Knowledge) Age

Post  Shelby Fri Aug 26, 2011 6:16 am

http://esr.ibiblio.org/?p=3634&cpage=2#comment-319568

I think perhaps my prior comment was not coherent enough.

With smartphone hardware costs trending asymptotically towards the cost of 100 grams of sand, plastic, and mostly non-precious metals, the future profit margins are in software. I previously wrote that the industrial age is dying, to be displaced by the software (knowledge) age. Automation is increasing and costs are declining towards material inputs, thus aggregate profits (and percentage share of the economy) are declining for manufacturing, even if profit margins were maintained (which they are not, because 6 billion agrarians are suddenly competing to join the industrial age in the developing world). There is now even a $1200 3D printer. Wow.

Assuming the smartphone is becoming a general purpose personal computer, the software paradigm that provides unbounded degrees-of-freedom can, in theory, gain use cases at an exponential rate over a bounded platform.

Even if Apple competes on the low-price end, I predict their waterfall implosion will be driven by some aspect of "web3.0" that diminishes their captive high rents on content and cloud services, because this will cut off their ability to subsidize software control (and the low-end hardware), i.e. the subsidy required because they are not leveraging the community's capital via unbounded open source. Such a paradigm shift may also threaten Google's captive high rents on ad services, but Google leverages open source to minimize the subsidy. I envision that Google will lose control over the platform once the high rate of market growth slows and vendors compete for a static market-size pie. That will be a desirable outcome at that stage of maturity.

The high-level conceptual battle right now is not between hardware nor software platforms, features, etc.. It is a battle between unbounded and bounded degrees-of-freedom. The future belongs to freedom and inability to collect high rents by capturing markets in lower degrees-of-freedom. So I would bet against all large corporations (eventually), and bet heavily on small, nimble software companies.

@Winter
I agree that the future profit margins belong to the owners of human knowledge (as distinguished from mindless repetitive labor that adds no knowledge), i.e. the individual programmers. Services are trending asymptotically toward (i.e. but never reaching) full automation, meaning that programming will move continually more high-level forever. Software is never static.

Thus, services is software. Knowledge is software.

I have written down (see What Would I Gain? -> Improved Open Source Economic Model, and scroll horizontally) what I think are the key theoretical concepts required to break down the compositional barriers (lack of degrees-of-freedom) so the individual can take ownership of his capital. I have emailed with Robert Harper about this. Afaics, once this is in place, large companies will not be able to take large rent shares. We are on the cusp of the end of the large corporation and the rise of the individual programmers, hopefully millions or billions of them. Else, massive unemployment.

@Winter
Agreed. The bits are never static. They continually require new knowledge to further refine and adapt them. It is not the bits that are valuable, but the knowledge of the bits: how to fix bugs, improve the bits, interoperate with new bits, and compose bits. And this process never stops, because the trend to maximum entropy (possibilities) never ceases (2nd Law of Thermo). What makes software unique among (and fundamental to) all other engineering disciplines is that software is the encoding in bits of the knowledge of the other disciplines – a continual process.

But actually it is not an encoding in bits. It is an encoding in continually higher-level denotational semantics. The key epiphany is how we improve that process, and the tipping point where it impacts the aggregation granularity of capital formation in the economic model of that process. If you understand language design and the references I cited, the links I provided might be interesting (or the start of a debate).

http://esr.ibiblio.org/?p=3634&cpage=2#comment-319625

@nigel Larger software teams accomplishing less is due to the Mythical Man Month. My conjecture is that individual developers will become the large team sans the compositional gridlock of the MMM, with the individual contributions composing organically in a free market. I realize it has been only a dream for a long time. On the technical side, there is at least one critical compositional error afaik all these languages have made, including the new ones you mentioned. They conflated compile-time interface and implementation. The unityped (dynamic) languages, e.g. Python, Ruby, have no compile-time type information.

If we define complexity to be the loss of degrees-of-freedom, I disagree that the complexity rises. Each higher-level denotational semantics unifies complexity and increases the degrees-of-freedom. For example, category theory has enabled us to lift any morphism function, i.e. Type1 -> Type2, to a function on any functor, i.e. Functor[Type1] -> Functor[Type2]. So we don't have to write a separate function for each functor, e.g. List[Int], List[Cats], HashTable[String]. Perhaps complexity has been rising because of the languages we use. We haven't had a new mainstream typed language since Java, which is arguably C++ with garbage collection. Another example: before assembly language compilers, we had the complexity of manually adjusting the cascade of memory offsets in the entire program every time we changed a line of code.
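
To make that concrete, here is a rough Scala sketch (a hand-rolled Functor type class with illustrative names, not any particular library's API) of writing a morphism once and lifting it over any functor:

Code:
// A minimal Functor type class: lifts f: A => B to F[A] => F[B].
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

object Functor {
  // One instance per container, written once...
  implicit val listFunctor: Functor[List] = new Functor[List] {
    def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
  }
  implicit val optionFunctor: Functor[Option] = new Functor[Option] {
    def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
  }
}

// ...so any morphism A => B works on every Functor without rewriting it.
def lift[F[_], A, B](f: A => B)(implicit F: Functor[F]): F[A] => F[B] =
  fa => F.map(fa)(f)

// e.g. the single morphism Int => String lifted to two different functors:
// lift[List, Int, String](_.toString)(Functor.listFunctor)(List(1, 2))   // List("1", "2")
// lift[Option, Int, String](_.toString)(Functor.optionFunctor)(Some(3))  // Some("3")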

Indeed it can consume a decade or more for a language to gain widespread adoption, but that isn't always the case, e.g. PHP3 launched in 1997 and was widespread by 1999. Afaik, a JVM language such as Scala can compile programs for Android.

@Winter, agreed. It is my hope that someday we won't need to pay for all the complexity bloat and MMM losses. We will get closer to paying the marginal cost of creating knowledge, instead of the marginal costs of captive market strategies, code refactoring, "repeating myself", etc. I bet a lot boils down to the fundamentals of the language we use to express our knowledge. Should that be surprising?

@phil captive markets grow as far as their subsidy can sustain them, then they implode, because of the exponential quality of entropy and the Second Law of Thermodynamics (or as otherwise stated in Coase's theorem). Apple's subsidy might be stable for as long as their content and cloud revenues are not threatened; perhaps they can even give the phones away for free or at negative cost. That is why I think the big threat to Apple will come from open web apps, not from the Android hardware directly. The Android hardware is creating the huge market for open apps. I guess many are counting on HTML5, but the problem is that its design is static and by committee, thus not benefiting from network effects and real-time free market adaptation. I would like something faster moving for "Web3.0" to attack Apple's captive market.

More on Copute's exchange model for open source; and Industrial Age decline

Post  Shelby Sun Aug 28, 2011 7:01 am

http://esr.ibiblio.org/?p=3634&cpage=3#comment-319827

@nigel
I agree with esr's Inverse Commons thesis. Apparently there is ample evidence of the success of open source. The involvement of corporations is anticipated by that model, thus if anything it provides more evidence of the model's success. The "gift or reputation" component of the model is in harmony with the strategic benefit to corporations. I also concur with esr's stated reasons for doubting how an exchange economy could work for open source.

However, as a refinement, I also think the lack of an exchange economy in that model means that mostly only entities who need a commons software improvement are motivated to participate. I know this is true for myself. To broaden the impact of open source, and to motivate people to contribute for income directly correlated to the market value of their contribution, I have in theory devised a way to enable the open source exchange economy. Notably it doesn't require copyright policing, nor anyone to assign a monetary value to a module, nor micro-payments, nor must it be one centralized marketplace. It is all annealed by the market. Relative value is calculated from relative module use, and relative price in an indirect way. Nominal price is never calculated. And for the vast majority of users, and certainly all non-profit ones, it remains a "gift or reputation" economy.
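
To illustrate the idea only (this is a toy sketch with names I made up, not a specification of the actual exchange model), relative value computed purely from relative module use might look like:

Code:
// Toy model: a module's relative value is its share of total observed use.
// No nominal (fiat) price ever appears; only ratios, annealed by the market.
def relativeValue(useCounts: Map[String, Long]): Map[String, Double] = {
  val total = useCounts.values.sum.toDouble
  if (total == 0) useCounts.map { case (module, _) => module -> 0.0 }
  else useCounts.map { case (module, uses) => module -> uses / total }
}

// e.g. relativeValue(Map("parser" -> 900L, "http" -> 100L))
//      == Map("parser" -> 0.9, "http" -> 0.1)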

It is my hope that this can drive millions or billions of people to become programmers. I might be wrong about this though, and I remain committed to the Inverse Commons as well. Please note that my theory is that adding more fine-grained relative value information to a market can make it more efficient (assuming there are no bottlenecks), because there would be more stored information and degrees-of-freedom annealed by the free market. Relative price is information. So my model is not so much about exchange of fiat currency, but about measuring this information. My "pie in the sky" dream is that the knowledge, with software modules as the proxy, becomes fungible money and thus a currency. Note that gaming currencies became so widespread that China outlawed their convertibility to yuan.

@The Monster
The opposite actually. Aggregate debt is growing nominally by the aggregate interest rate while it is serviced. Aggregate debt can shrink during implosive defaults, but not if the defaults are transferred to government debt, as is the case in the western world today.

I understand your argument that real debt isn't growing if production increases faster than debt (a/k/a positive marginal-utility-of-debt), but you discount the damage due to supply and demand distortion. In fact, the western world is now in negative marginal-utility-of-debt, i.e. the more public debt we add, the faster real GDP shrinks.

This is explained by the disadvantage of a guaranteed rate of return compared with equity: the investor has less incentive to make sure the money is invested wisely by the borrower, i.e. passive investment. No amount of regulation can make it so. The growth of passively invested debt causes mutually escalating debt-induced supply and demand. When that implodes (due to the distortion of the information in supply and demand which the debt caused), the capitalist lender demands the government enforce the socialization of his losses. Thus the fixed interest rate (usury) model is an enslavement of labor and innovation to passive stored capital, and is the cause of the boom and bust cycle. Equity is in theory a far superior model. But the problem with equity is the attempt to guarantee the rate of return via captive markets (a/k/a monopolies or oligopolies), i.e. again stored capital wants to be passive. The basic problem is stored capital, i.e. the concept that what we did in the past has a value independent of our continued involvement. I am trying to end the value of passive stored capital with my work on an exchange open source economy, and I think it is declining anyway with the decline of the industrial (capital intensive) age.

@nigel
I think "individual devs will supplant large teams funded by large corporations", and it will be because the marginal cost of software is not free. In The Magic Cauldron, my understanding is that esr argues that software is not free and has costs that in most cases could never be recovered by selling it. The Use-Value and Indirect Sale-Value economic models presented in The Magic Cauldron, seem to acknowledge that open source will be funded by corporations. I think there can be an exchange model which can enable individual devs to function more autonomously, but it is achieved, it will be because software is not free and has use value cases that can be worked on independently in compositional modules.

@The Monster
Evidence says that the total (a/k/a aggregate) debt of the fiat system increases at the compounded prevalent interest rate. For my linked graphs, note that M3 is debt in a fiat system, because all fiat money is borrowed into existence in a fractional reserve financial system.

Apparently the reason for this debt spiral is that even while some pay down debt, that debt elevated demand, which escalates supply and debt, which then escalates demand, which then escalates supply and debt, recursively, until the resources feeding the system become too costly, then it implodes. There are also at least two other reasons. When money is borrowed into existence from a bank's 1-10% reserve ratio and is deposited, it increases the bank's reserves, thus increasing the amount that can be loaned into existence. Perhaps more importantly, the money to pay off the debt has to be created (since the entire economy is run on debt money), and thus must grow at the compounded interest rate on the aggregate in the economy, as the evidence shows. So raising interest rates to slow down the economy actually increases the rate at which the debt grows.
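
As a bare arithmetic illustration (toy numbers of my own, not a claim about any particular economy), aggregate debt compounding at the prevalent rate looks like:

Code:
// Aggregate debt after t years at a compounded prevalent rate r.
def aggregateDebt(d0: Double, r: Double, years: Int): Double =
  d0 * math.pow(1 + r, years)

// e.g. at 5% the aggregate roughly 2.65x's in 20 years:
// aggregateDebt(10e12, 0.05, 20) is about 2.65e13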

I did not criticize storage of sufficient inventories. Physical inventories are becoming a smaller component of the global economy, and I bet at an exponential rate. I criticized passive capital, meaning where our past effort extracts captive market rents on the future efforts of others, simply for doing nothing but carelessly (i.e. passively, with guaranteed return) loaning digits which represent that past effort (or guaranteeing ROI with monopolies, collusion with the government, etc.). Contrast this against, say, offering some product and the ongoing effort to support that product, i.e. active investment in any venture where your past experience is being applied most efficiently towards active current production, which would include equity investments based on your active expert knowledge of what the market needs most. What you wrote about Ric does not disagree with my thesis. For example, as I understand esr's thesis about use versus sale value in The Magic Cauldron, it says open source program code can't be rented unless there is ongoing value added, i.e. the value is in the service, not the static bits. He mentions the bargain bin in software stores for unsupported software. Machine tools are critically important, but not the raw material inputs, and not so much the machine itself, but rather the knowledge of the design, operation, and future adaptation and improvement.

@nigel:
Btw, I worked briefly on Corel Painter in the mid-1990s, when it was Fractal Design Painter, and Steve Guttman came to us from being VP of Marketing for Adobe Photoshop (he is now a VP at Microsoft, and Mark Zimmer is now making patents for Apple). I escaped from that mentality and software model, under which my creative abilities were highly under-utilized because we had to give way to the founder heroes (and I took advantage of it too). I appreciated the learning experience and the opportunity to work with people with 160+ IQs (Tom Hedges purportedly could memorize a whole page of a phone book by glancing at it), but I also see the waste (captive enslavement of those who need a salary) of resources in that non-optimal allocation model. I have not worked for a company since.

With a compositional model, I assert proprietary software is doomed to the margins. Open source increases cooperation. No one can make the cost of software zero. Open source is a liberty model, not a zero-cost model. My understanding is that Richard Stallman and the FSF are against OSI's replacement of the word "free" with "open source". My understanding is that the FSF requires that the license disallow charging for derivative software so that the freedom-of-use is not subverted by forking, but perhaps this is in tension with the reality of the nonzero cost of software and the types of business models that derivatives might want to explore. I may not fully understand the rift, or maybe there is no significant rift, yet apparently there is some misunderstanding outside the community of what is "free" in open source.

If we have technology such that software modules are compositional without refactoring, I think this tension in derivative software will diminish, because then any derivative module (which is a composition of modules) is completely orthogonal code, and thus may have a separate license from the modules it reuses without impacting the freedom-of-use of the reused component modules, because the reused modules will not have been forked nor modified. Thus I propose that with a compositional computer language, individual modules can be created orthogonally by individual devs and small teams, and thus the importance of corporations will fade.

@Jeff Read and @uma:
In my "pie in sky" vision, the corporations can still exist to slap on compositional glitter to core modules created by compositional open source. And they can try to sell it to people, but since the cost of producing such compositions will decline so much (because the core modules have been created by others and their cost amortized over greater reuses), then the opportunities to create captive markets will fade also. In a very abstract theoretical sense, this is about degrees-of-freedom, fitness, and resonance (in a maximally generalized, abstract definition of those terms).

The cost of creating the core modules is not zero, and so I envision an exchange economy to amortize the costs of software in such a compositional open source model. But first we need the compositional technology, which is sort of a Holy Grail, so skepticism is expected. I am skeptical too, and thus curious to build it to find out if it works. However, if there is someone who can save me time by pointing out why it can't work, that would be much appreciated. Which is perhaps why I mentioned it to the great thinkers here. Also to learn how to become a positive member of this community.

@The Monster:
I don't see how the conclusion would be different whether the growth of total debt causes the interest rate to be correlated, or vice versa, i.e. transposing cause and effect makes no difference. Even if the correlated total debt and interest rate are not each causing the other, the conclusion remains that total debt grows at the prevalent interest rate compounded. And I don't think you are refuting that an increase in debt increases demand and supply (of some items) in the economy. Recently it was a housing bubble. Loans pull demand forward, and starve the future of demand.

@The Thinking Man:
The problem with + for string concatenation arises only when there is automatic (implicit) conversion between strings and other types which use the same operator at the same precedence level, e.g. integers. This creates an ambiguity in how an expression will be interpreted. Eliminate this implicit conversion, and + is fine.
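
For instance, in Scala (which, like Java, gives arithmetic + and string-concatenation + the same name and precedence, and converts the int to a string automatically), the same three operands mean different things depending only on order:

Code:
// Left-associative '+' at one precedence level, plus automatic int-to-string conversion:
val a = 1 + 2 + "x"   // "3x"  -- arithmetic first, then concatenation
val b = "x" + 1 + 2   // "x12" -- concatenation all the way through
// Removing the implicit conversion (forcing e.g. "x" + (1 + 2).toString)
// makes '+' on strings unambiguous again.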

I read that Objective C does not support mixin multiple inheritance, thus it cannot scale a compositional open source model. I don't have time to fully analyze Objective C for all of its weaknesses, but it probably doesn't have a bottom type, higher kinds, etc. All are critical for wide-area compositional scale. Thus I assume Objective C is not worth my time to analyze further. I know of only 3 languages that are anywhere close to achieving the compositional scale: Haskell, Scala, and Standard ML. Those are arguably obtuse, and still lack at least one critical feature. I realize this could spark an intense debate. Is this blog the correct place?
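
For contrast, a small sketch of the mixin (diamond) multiple inheritance that Scala supports via trait linearization, the kind of composition I'm claiming such languages need:

Code:
// Diamond: Logging and Timing both extend Service; a class mixes in both.
trait Service {
  def run(): String = "work"
}
trait Logging extends Service {
  override def run(): String = "log(" + super.run() + ")"
}
trait Timing extends Service {
  override def run(): String = "time(" + super.run() + ")"
}

// Linearization resolves the diamond: Timing wraps Logging wraps Service.
class Job extends Service with Logging with Timing

// new Job().run() == "time(log(work))"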

@shelby
About the rise of the consultant, what you are referring to is the theory of the firm.

http://en.m.wikipedia.org/wiki/Theory_of_the_firm

@Winter so Transaction Cost Theory defines the natural boundary and size of the corporation. They mention Coase's Theorem. Thanks.

@uma:
I agree, if you meant not only FP, but immutable (i.e. pure, referentially transparent) FP. It also must have higher-kinded, compile-time typing, and this can be mostly inferred, with the unifying higher-level category theory models hidden behind the scenes to eliminate compositional tsuris, without boggling the mind of the average programmer.

I understand, because I initially struggled to learn Haskell, Scala, and Standard ML. If we make PFP easier, more fun, more readable, less verbose, and less buggy than imperative programming, perhaps we can get a waterfall transition. Note PFP is just declarative programming (including recursion), and declarative languages can be easy to use, e.g. HTML (although HTML is not Turing complete, i.e. no recursion). This is premature to share, as I have no working compiler and no simple tutorial, only the proposed grammar (SLK syntax, LL(k), k = 2) and some example code. I found many confusing things in Haskell and Scala to simplify or entirely eliminate, including the IO monad, that lazy nightmare, Scala's complex collection libraries, Scala's implicits, Scala's mixing of imperative and PFP, and Java & Scala type parameter variance annotations (which are unnecessary in a pure language), etc.

@john j Herbert:
Apple's gross appstore revenue for 2011 is projected to be $2 - 4 billion. And total appstore annual gross revenue is projected to rise to $27 billion in 2 years. While hardware prices and margins will decline, the confluence thus perhaps lends some support to my thought that the future waterfall decline threat to Apple is an open-app Web3.0 cloud. Perhaps Apple's strategy is to starve the Android hardware manufacturers of margins, as the margins shift to the appstore and eventually total smartphone hardware volume growth decelerates and the end of the debt-driven expansion starves future hardware demand. I note the battle with Amazon over the name "Appstore". I broadly sense that Apple may be trying to create a captive internet, but I haven't investigated this in detail.

@jmg:
My understanding is that Smalltalk is anti-compositional, i.e. anti-modular, because it doesn't have a sufficient type system[1], e.g. subtyping, higher-kinds, and diamond multiple inheritance. You can correct my assumption that an object messaging fiction doesn't remove that necessity.

@uma:
Agreed that interleaving FP and imperative adds complexity to the shared syntax for no gain if purity (a/k/a referential transparency) is the goal (and this apparently applies to Clojure too), because in functionally reactive programming (i.e. interfacing IO with PFP), the impure functions will be at the top level and call the pure functions[2], a simple concept which doesn't require the mind-boggling complication of Haskell's IO monad fiction.
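
A rough Scala sketch of that structure (illustrative only, not Copute syntax): the impure function sits at the top level doing the IO, and all the logic lives in pure functions it calls:

Code:
object Greeter {
  import scala.io.StdIn

  // Pure core: no side effects, trivially composable and testable.
  def greet(name: String): String =
    "Hello, " + name.trim.capitalize + "!"

  // Impure shell: top-level IO that calls the pure core; no IO monad needed.
  def main(args: Array[String]): Unit = {
    val name = StdIn.readLine("Your name? ")
    println(greet(name))
  }
}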

Clojure is not more pure than Scala, and is only "configurably safe". A Lisp with all those nested parentheses requires a familiarity adjustment (in addition to the digestion of PFP) for the legions coming from a "C/C++/Java/PHP-like" history. I doubt Clojure has the necessary type system for higher-order compositionality[1].

My point was we need all of those advantages in one language for it to hopefully gain waterfall adoption. The "easier" and thus "more fun" seem to be lacking in the only pure FP language with the necessary type system[1], Haskell. And Haskell can't do diamond multiple inheritance, which is a very serious flaw pointed out by Robert Harper and is apparently why there are multiple versions of functions in the prelude. All the other non-dependently-typed languages have side-effects which are not controlled by the compiler, or don't have the necessary type system. Dependently typed languages, e.g. Agda, Epigram, and Coq, are said to be too complex for the average programmer, which esr noted in his blog about Haskell.

I agree the IDE tools are lacking, but HTML demonstrated that a text editor is sufficient. I disagree that any of those other languages can become the next mainstream language, regardless of how good their tools become, because they don't solve the compositional challenge[1], so what is the motivation for the vested interests to leave the imperative world? I think a language that solves the compositional challenge will "force" adoption, because its collective community power in theory grows like a Reed's Law snowball, i.e. it will eat everything, due to the extremely high cost of software and the amortization of that cost over a more granular compositional body of modules.

@phil:
The prior paragraph derives abstractly from thermodynamics, i.e. economics. State and variables exist in PFP. What PFP does is make the state changes orthogonal to each other[3].

@uma:
The time indeterminism in Haskell is due to lazy evaluation, which isn't desirable. See the "lazy nightmare" link in my prior post for the rationale. Orthogonal to the indeterminism issue: where finely-tuned imperative control of time is necessary (which btw is always a coinductive phenomenon[2] and thus anti-compositional), it goes in the top-level impure functions in Copute.

@Jeff Read:
IO in PFP requires the compositional way of thinking about the real world[2], i.e. a coinductive type. The practical example[2] is easy for the average programmer to grasp. It is just a generalization of the inversion-of-control principle. This stuff isn't difficult; it was just difficult to realize it isn't difficult. Once that "a ha" comes clear in the mind, it is like "why wasn't I always doing it this way?".

Sections of my site (scroll horizontally):
[1] Skeptical? -> Expression Problem, Higher-Level, and State-of-the-Art Languages.
[2] Skeptical? -> Purity -> Real World Programs.
[3] Skeptical? -> Purity -> Real World Programs -> Pure Functional Reactive -> Declarative vs. Imperative.



@Nigel
It may be true that there are cases where there are transactional costs for uncoordinated software development that leave captive markets for the corporation. That isn't an indictment of the open source model of cooperation in the Inverse Commons, which amortizes costs and risks, but imo rather an orthogonal indictment of the technology we currently have for software development.

Hypothetically, if a huge software project could be optimally refactored such that it had the least possible repetition, and if I was correct to assert that mathematically this requires the components (e.g. functions and classes) to be maximally orthogonal, then what would happen to your assertion that only big software companies will ever be able to cooperate to create huge projects?

In the theory of the firm that Winter shared, the reason the corporation exists is because there is a transactional cost (or risk cost) for uncoordinated cooperation. So what is the nature of that transactional cost in software? Afaics, it is precisely what causes the Mythical Man Month, i.e. getting all devs on the same wavelength, because the code they write all has interdependencies. But if there is maximal orthogonality of code, then the communication reduces to the public interface between code. Also, with higher-level models such as Applicative, any function over unlifted types, T -> A -> B -> C ..., is automatically lifted to higher-kinds of those types, i.e. you get for free, without writing endless special-case boilerplate, e.g. List[T] -> List[A] -> List[B] -> List[C], and likewise for any other class type that inherits from Applicative, not just List (see the sketch below). This is the sort of reuse and orthogonality that could radically reduce the transaction costs for uncoordinated development. With a huge preexisting library of orthogonal modules, a small team of the future could possibly whip up large compositions at an exponentially faster rate. We have a huge body of code out there today, but my understanding is it is often difficult to reuse, because it takes more time to learn what it does and to extricate and refactor the needed parts. I have not read esr's book on unix philosophy and culture, but I think I understand intuitively that it has a lot to do with using orthogonal design, e.g. pipes in the shell with many small utility commands (programs). Although it might seem that code for different genres of programs is totally unrelated, I am finding in my research that maximal orthogonality produces more generalized code.
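
Here is a rough Scala sketch of that Applicative lifting (a hand-rolled type class with illustrative names; existing Scala libraries spell the same idea differently):

Code:
// Minimal Applicative: enough to lift a curried 2-parameter function once.
trait Applicative[F[_]] {
  def pure[A](a: A): F[A]
  def ap[A, B](ff: F[A => B])(fa: F[A]): F[B]
  // Derived once, for every Applicative, from pure and ap:
  def lift2[A, B, C](f: A => B => C)(fa: F[A], fb: F[B]): F[C] =
    ap(ap(pure(f))(fa))(fb)
}

object Applicative {
  implicit val listApplicative: Applicative[List] = new Applicative[List] {
    def pure[A](a: A): List[A] = List(a)
    def ap[A, B](ff: List[A => B])(fa: List[A]): List[B] =
      for { f <- ff; a <- fa } yield f(a)
  }
  implicit val optionApplicative: Applicative[Option] = new Applicative[Option] {
    def pure[A](a: A): Option[A] = Some(a)
    def ap[A, B](ff: Option[A => B])(fa: Option[A]): Option[B] =
      for { f <- ff; a <- fa } yield f(a)
  }
}

// The ordinary, unlifted function is written exactly once...
val add: Int => Int => Int = x => y => x + y

// ...and works on List, Option, or any other Applicative, with no boilerplate:
// Applicative.listApplicative.lift2(add)(List(1, 2), List(10, 20))  // List(11, 21, 12, 22)
// Applicative.optionApplicative.lift2(add)(Some(1), Some(2))        // Some(3)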

I can rightly be accused of over-claiming without a working body of code (so I had better shut up), and on the other extreme you wrote we can "never" progress. I hope I can change your mind someday.

From email:
>> https://goldwetrust.forumotion.com/t151-book-ultimate-truth-chapter-3-capital-is-not-money#4540
>
> Not true. Under the gold standard the money actually buys more over time.

Yeah but that is not what I said. I said the nominal increase is not a proportional increase, i.e. your portion of the entire economy decreases.

However, if gold is the only thing that is money, then your proportion would only decline by the mining rate of gold, and this is why I say we can never (nor should we) make gold the only thing that is money, because then it would mean passive capital owns future innovation.

Read more at Passive Stored Capital is Always Fleeting (Depleting).

http://esr.ibiblio.org/?p=3689&cpage=5#comment-321944

The events leading out of feudalism appear to be attempts to free humanity from the slavery of unmotivated passive capital, whose power was sustained by the marriage of state and religion (which outlawed the "sin" of usury), by using debt to bypass and compete against hoarded private capital. I wrote previously that gold can't be the only money, otherwise passive capital enslaves all future innovation, because all profits are captured as a deflation relative to gold. It appears mankind has been oscillating between debasement blowback (the Roman empire and now) and no debasement, which motivates capital hoarding (feudalism).

The fundamental problem is that in a material world, the transactional cost in the Theory of the Firm (thanks Winter) enables corporate capital to accumulate faster than the capital of those who produce the knowledge. However, I think we are entering a radical paradigm shift, where knowledge (the mind) becomes much more valuable than material production, because industry can be automated (see the $1200 3D printer) but knowledge isn't static and can't be automated. I refuted Kurzweil's Singularity and debated Chomsky on Hume's mitigated skepticism (the upshot is I argue that abstract math and infinity exist and are equivalent to the never-ending universal trend to maximum entropy).

http://esr.ibiblio.org/?p=3695&cpage=2#comment-321961

@Nigel:
hardware prices remain fairly static and manufacturers just provide greater capability at the same price points

The greatest capability increase of the past decade has been the knowledge deposited on the internet, the consumption of which has supplanted compiling as my main knowledge activity. For that purpose, my less than $100 computer (in 1990s dollars) works as well as the $1000+ one I needed a decade ago to compile faster (when compiling was my main activity). I am not factoring in the price of the monitor, as these have razor-thin profit margins, and remember my point is that profit margins drive the relative "nominal" (global aggregate) profit when comparing hardware vs. software.

And the price is not my point, but rather that per-unit profit margins in hardware are declining, because the economies-of-scale are increasing with physical production automated (or done by cheap labor, which is in oversupply as we enter the knowledge age). Thus the aggregate profits are becoming a relatively lower percentage of total profits in the world, when the comparison is between industry in general versus software and knowledge production. In short, all profits are derived from the knowledge portion of the business, not the physical production.

Apple apps still perform better than open apps regardless of underlying technology

That debatable quality advantage isn't sustainable, because the Inverse Commons has proven numerous times to be the winning economic model.

@Winter:
So, in the end, it is work that will pay. In the end, work by the hour.

Knowledge can't be automated. Labor by-the-hour is not correlated with knowledge production, and is often anticorrelated, e.g. the Mythical Man Month. The belief that labor and knowledge are equivalent is the fallacy of communism.


Last edited by Shelby on Thu Sep 08, 2011 1:28 pm; edited 5 times in total

Eric Schmidt says Google+ is to track your identity

Post  Shelby Wed Aug 31, 2011 7:35 am

http://esr.ibiblio.org/?p=3500&cpage=1#comment-320303

Note the original "issues analysis" link can no longer be read without logging in to G+. Is the future of the internet that we can't access information without our identity being tracked?

Eric Schmidt says that G+ is really an "identity service, so fundamentally it depends on people using their real names if they're going to build future products that leverage that information" (presumably for an advertising database). Is it assured "do no evil" to create a centralized global database of identities and track all the social groupings (i.e. interests, political sub-groupings, business affiliations, interest in certain ideas due to link crowd-sourcing, etc.)?

I propose we in open source can create our own open decentralized social network, without depending on a large corporation.

Chile is giving away $40,000 to startups that relocate to Chile

Post  Shelby Thu Sep 01, 2011 12:18 am

I inquired:

http://www.startupchile.org/2011-round-2-now-closed/#comment-5193

Shelby wrote:
Would the outline of my startup at my website be adequate? (scroll horizontally)

You can also read my numerous comments at this blog of Eric Raymond, one of the founders of the open source movement:

http://esr.ibiblio.org/?p=3634#comments

Does this offer a path to permanent residency and citizenship?

I don’t need the $40,000, and I don’t have a lot of time to waste. But I am very interested in South America and helping to lead your developers.

My project is most definitely global; I expect it to affect everything.

Anonymity, Google, and future of the world

Post  Shelby Thu Sep 01, 2011 11:22 am

Regarding this post:

Eric Schmidt says Google+ is to track your identity

Shelby replied in email:
Nothing is free ever. Liberty yes, but zero cost doesn't exist.

Technically speaking, you are wrong when you say a decentralized, anonymizing network can't limit the degree of invasion of privacy. Although it is true that a determined hacker (such as the govt) can always break through anonymity shields, in real terms it can raise the reasonable cost for them to do it, such that 99.9% of the people will be effectively anonymous. There is a huge difference between that and what Google is doing. HUGE MY FRIEND. VERY HUGE.

One thing you didn't realize is that independent actions in free markets look random and disorganized. The structure is hard to identify, so it is difficult for those who own the connecting wires to use the information, because it won't mean anything to them. Yet still the free market anneals a globally optimum result:

http://esr.ibiblio.org/?p=3614#comment-320296

Please don't say it will never exist, unless you can justify that statement technically. I have been studying and thinking about P2P and decentralization since at least 2006. Heck I even explained to BitTorrent that their economic algorithm was flawed:

http://forum.bittorrent.org/viewtopic.php?id=28

I am implementing that decentralization now with Copute.

Copute will eliminate the "power of top-down control", which you pigeon-hole as being only psychiatry.

Apparently you don't understand that there are degrees of anonymity on the internet. At the extreme, one logs in from an internet cafe, refuses cookies while browsing, and never logs in with any identity to any site. The next level is to use a VPN, and then at least the authorities have to raid your VPN provider to get the IP correlation to your identity.

The degree of fine-tuned information is much greater with Google because it is everywhere (I touch their domain on nearly every page I visit). I will give you an example.

Google writes a cookie for *.google.com, then you log in to any Google service, including Blogger, Gmail, etc., then you log out. Now Google can correlate your exact identity via that cookie whenever you surf to any website that has advertising coming from *.google.com.

Additionally, Google+, using social crowd-sourcing, will be able to narrow down your personality profile and other statistics about you to such a degree that they will perhaps know you better than you know yourself:

http://esr.ibiblio.org/?p=3614#comment-320285
http://esr.ibiblio.org/?p=3614#comment-318814

Should someone with connections to the "authorities" want to get a list of all people who oppose something or are vested in some competing business or movement, then in theory they can, under the dictatorial powers created by the various executive orders and laws passed since 9/11.

Also I am planning to do something about this. That is the reason I raise this issue. There is an alternative coming.

Btw, the progress on Copute is going amazingly well and I expect to start launching and changing the world before the end of 2011. Here are the latest code samples, and you can read my numerous comments at the following blog of Eric Raymond, one of the main proponents of open source (he wrote the famous The Cathedral and the Bazaar):

http://copute.com/dev/docs/Copute/ref/std/
http://esr.ibiblio.org/?p=3634#comment-319373 (Ctrl+F for "Shelby" to find all)

Note Google says they anonymize data after 18 months, but since the data is always being renewed, that effectively means forever.

> A non-issue.
> So what if Google+ forces real names.
>
> Anyone who has been online for 3 months (probably less) has his identity
> known, at least in the US and most of the EU. Many people believe that they
> are "anonymous" when online, or they believe that what they do is not
> associated with their real identities. Hogwash. The US and other
> allied govts have long ago employed systems that monitor Internet
> traffic, such as Carnivore, Echelon and other secret systems.
>
> Yes, a person can be somewhat anonymous up to the point where a govt
> decides that they want to identify "suspects". How do you think they
> catch hackers and catch members of Anon and other hacker groups? It's
> not done by legwork. It's done by rapidly culling databases of captured
> traffic. It takes longer to ID these guys because they route all
> their traffic via onion networks, but even those networks are vulnerable
> to the sophisticated govt monitoring systems.
>
> The same data gathered by Google, Facebook and other Internet sites is
> also gathered in physical locations like malls, stores, gas stations,
> grocery stores, etc. Ever since plastic has been used as a payment
> method, these corps have been collecting and sharing their databases of
> consumer activity.
>
> Go to a movie, buy the ticket using anything other than cash and your
> interest gets recorded. Buy gas, throw in a package of M&Ms and your
> candy choice gets recorded. Do that several times over the course of a
> few months and you will start to receive snail mail ads from the candy
> company.
>
> Almost every purchase made today has a clause in its purchase agreement
> that "some info will be shared with partner organizations."
>
> And when you "follow the money" you discover that there are just several
> very large corps at the top which own and control all the smaller
> corps. Thus, the term "partner companies" becomes "just about every
> other company out there".
>
> Those that scream and natter about "my privacy is being violated"
> usually always have something to hide, some criminal activity that they
> are involved in or were involved in in the past, and don't want to be
> found out about. They either murdered someone or they cheated on state
> tax returns, or they yelled at the wife for flirting with the neighbor
> in 1978. Or any transgression in between the extremes.
>
> I am not saying we should not stand up & fight for our human rights. I
> am saying that it is unnecessary to focus on Internet data collection
> and more viable to focus upon methods of changing the world that are
> effective. A real, free, secure, decentralized networking system will
> not change the world. No such system has ever existed nor will it ever
> exist unless one also owns and controls the Internet itself. I am
> talking about the actual Internet by definition, which means the
> hardware, infrastructure, satellites, cables, towers, etc. Unless one
> controls those, any govt, Internet provider or large powerful
> corporation can use the infrastructure as they see fit (monitor the
> traffic and collect the packets).
>
> "In the interest of national security" is something the US uses to get
> its hands on anything they want, including Google's algorithms. You
> think Google wins against the US govt security agencies. Ha! What we
> see re technology legal battles in the media is only what they allow us
> to see.
>
> There's really only one surefire method of changing the world into one
> where human rights are a reality: Eliminate the groups and individuals
> that are responsible for causing the mindset of humanity to believe that
> man is but an animal to be controlled. This philosophy stems from
> Wundt, Pavlov and Marx, and has permeated every major facet of human
> activity. Socialist and Communist ideals bring about the downfall of a
> civilization.

Industrial age is dying; corporations will die; software will rule

Post  Shelby Fri Sep 02, 2011 7:59 am

> Shelby,
>
> Well....I looked at the Forum and not much has changed.
>
> Is there any ANALYSIS of individual stocks? (I did the CAPS because I see
> we lost SRSrocco)

I don't think you should be investing in mining stocks, because we are coming to the end of the industrial age.

Software will overtake everything by an order-of-magnitude.

The physical world is fading away.

Physical things are not as efficient.

Hunter & Gatherer Age
Agricultural Age
Industrial Age
Knowledge Age (software is encoding knowledge)

Just as it wasn't worth investing in hunting & gathering in the prior ages, and it hasn't been worth investing in agriculture lately, so it won't be worth investing in mining stocks.

I was an early employee of both PayPal and Eventbrite. I know how that works. Anyway, back to metals and miners.

What is your point?

PayPal didn't really increase knowledge much, because they were just an extension of the credit card system. They didn't really impact software.

Eventbrite was part of an evolution to increase the online knowledge about live events, such as concerts. That is a mild contribution to the software knowledge, but not very significant.

Whereas Amazon has the commodity server business, where they sell server time as a commodity. And I think as my Copute destroys Google and Facebook, you will see Amazon's commodity server model rise to the top.

And I am becoming more confident about Copute in the past few weeks, as the design and implementation come together to exceed my expectations. I am serious about the destruction of the large corporations. Read my comments on the following two blog pages:

http://esr.ibiblio.org/?p=3634#comments (Ctrl+F then search for "Shelby")
http://esr.ibiblio.org/?p=3689#comments (Ctrl+F then search for "Shelby")

Counter-example to "Lazy more compositionally efficient than eager"

Post  Shelby Fri Sep 02, 2011 7:09 pm

http://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.html#992454843212471185

Come to find out that I can get the same efficiency as lazy evaluation does for 'or . map p' by using Traversable in a strict language, where I have a version of 'traverse' which inputs a function that returns Maybe (i.e. it can terminate the iteration early).
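
A minimal sketch in Scala of what I mean (my own illustrative names, not any actual library's API): a strict, short-circuiting traversal that stops as soon as the supplied function returns a value, so 'or . map p' never builds the intermediate list:

Code:
// Strict short-circuiting fold: stops as soon as f yields Some.
// Names (traverseUntil, exists) are illustrative, not a standard API.
def traverseUntil[A, B](xs: List[A])(f: A => Option[B]): Option[B] =
  xs match {
    case Nil          => None
    case head :: tail => f(head) match {
      case found @ Some(_) => found                  // terminate the iteration
      case None            => traverseUntil(tail)(f)
    }
  }

// 'or . map p' without materializing the mapped list:
def exists[A](xs: List[A])(p: A => Boolean): Boolean =
  traverseUntil(xs)(x => if (p(x)) Some(()) else None).isDefined

// e.g. exists(List(1, 2, 3, 4))(_ > 2) inspects only 1, 2, 3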

http://esr.ibiblio.org/?p=3634&cpage=4#comment-320849

I am becoming more confident there are no mainstream use cases where FP has disadvantages. And the advantages are beyond orders-of-magnitude, with the power of Reed's Law for composition. Imperative programming isn't generally compositional, and that is fundamental.

I recently explained the solution to n-queens using an immutable random access array with O(1) operations, which provides evidence that immutable is not likely to be log n slower in mainstream use cases. Today I published a tweak to Traversable so that lazy is not more efficient than eager for FP composition on combinations of mapping and folding.

As best as I can tell, no one other than me is thinking about the separation of interface and implementation in FP. If anyone knows of someone who is, I want to know who they are. So that could be the reason no progress has been made. I had a huge learning curve to climb, starting circa 2008, and I didn't really decide until 2011 that I had no choice but to create a language.

Also I could not achieve my work without Scala. There is no way I could reproduce all that effort by myself in any reasonable time-frame, so initially Copute will compile to Scala, and let the Scala compiler do as much as possible of the heavy lifting. My gratitude to the Scala folks is beyond words. I understand that Odersky's model was to throw in everything including the kitchen sink, because it was a research project to demonstrate that generality. My goal is different. Again too much talk on my part, but I am excited.

And the point is that Scala wasn't well known and fully capable until roughly 2008, maybe 2006 sans some key features.

Some progress has been achieved, but not so mainstream yet. Twitter is deploying Scala and other boutique JVM languages throughout their server-side.

Why pure languages are better for strings

Post  Shelby Sun Sep 04, 2011 7:17 am

http://esr.ibiblio.org/?p=3634&cpage=5#comment-321156

Shelby wrote:
@The Thinking Man:
The original complaint wasn't about the semantics, but rather the verbosity of:

Code:
NSString *test = [myString stringByAppendingString:@" is just a test"];

The distinction you've raised is not a justification for the extra verbosity of Objective C, but rather the raison d'être for immutability, i.e. purity, a/k/a referential transparency. In a pure language, no object can be mutated, which eliminates the spaghetti nature of cross-dependencies between functions, i.e. an impure function which inputs a string and appends to it by mutating its input has modified its caller's data and thus has a dependency on its caller, which spreads out into the program like spaghetti.

In a pure language, the string can be represented as an immutable list, thus the '+' operator for strings is the list construction operator, which eliminates the wasteful copying and the other problems you raised:

Code:
val test: List[String] = myString :: List(" is just a test")

Pure FP isn't generally slower, and in the above case it is faster, except for concatenation of very small strings (which could thus be automatically optimized by the '+' operator into a copy of only the small portions into a new string), because it forces us to use algorithms which make more sense from an overall compositional perspective.

Thanks for providing a clear example of why pure FP is what we should all be using.

Copute's economic model versus capitalists

Post  Shelby Sun Sep 04, 2011 8:25 am

http://esr.ibiblio.org/?p=3689&cpage=3#comment-321161

Shelby wrote:
@Winter: I have a longish post stuck in the moderator queue (because it is too long), and I wanted to be clear that the point I am detailing in that post is that the social structure is not strong in the collectivist model, because it is based on false pricing of relative value. And that also is the essence of the suburbia point I am making. And my overall mathematical point is that the way collectivism spreads is primarily via the debt model. This is what makes it so impossible to get rid of, and why we end up talking a lot, because we can't fix it. However, I think there is a technological solution that won't require any more talk.

@Jessica et al, and this false pricing of relative value explains why education is so messed up, and concurs with your and Monster's points. Winter, in a non-collectivist, free-market system, the kids and parents who are unmotivated wouldn't be forced to attend, and the society wouldn't be forced to waste its resources. Collectivism always involves force. Free markets involve choice. When people can choose, resources are not wasted, because ultimately no matter how much we force people, people are only productive in the areas they (like and) choose to be. Not everyone is destined for a highly paid profession; some will be more lowly paid janitors with "on the job" education. In such a competitive model, people have an incentive to improve their education, if they so desire.

Here follows the long post referred to in the above comment.

This will be a bit long, because this requires deeper conceptual exposition.

@Nigel: I misspoke, Ecole Classique is only $5k/yr for high school, competitive AA athletics, and Latin.

I assume you are conflating the price you pay for subsidized wireless with my assumed higher actual cost of providing lower-density service. And/or it may be that the cost advantage diminishes above some population density, and this would not disprove my unproven theory that providing low-density wireless may not have a positive real rate of return on capital.

I don't know if there is an inherent density limit for wireless, or it may be that the mean density of the area around the stadium is much lower than the peak density during event times, and thus sufficient hardware for peak density is not economically justified.

Agreed about processed sewage, because the primary cost is the ongoing output per person. We have septic tanks here, which is better because it eliminates another vector towards collectivism and mis-pricing of relative value. Passive sewage systems can be more healthy and produce free methane to power heating and cooking. But of course your local planning board won't allow it, because there is no income for them. Your relatives left the Philippines because they wanted to avail of the subsidized economy, where they can earn debt-inflated higher wages. But a reversion is coming.

The raw materials cost was not lower for larger sq ft housing, and the land, taxes, permitting, labor, etc. cost significantly more than without the debt subsidy. And this is precisely my overall point: this phenomenon is a disconnection of price from resource cost (and relative knowledge value in the society), caused by the debt and government subsidy economic model. It is a problem of uniform distribution, or socialism again. If everyone can get a loan to have the same size house, then all the inputs to housing go higher, and disconnect from resource (relative value of knowledge) efficiency. Refer back to my prior comments on this page and the prior few blogs, where I explained that uniform distributions are the antithesis of progress, knowledge, efficiency, and prosperity. For example, when a programmer such as yourself, who is adding much more knowledge to the economy, has to pay programmer-level wages for basic services which do not add as much knowledge, then the economy becomes unproductive (as is the case now with huge and increasing debts). If everyone is rich, then no one is. The fallacy that if everyone is rich, then everyone is more prosperous, is the lie of pulling demand forward with debt, which always ends in loss of production, because of the mispricing of the relative value of knowledge.

Catherine Austin Fitts calls this the Tapeworm economy, where the productive capital of the locals is siphoned off to the capitalists far away.

A definition of insanity is doing the same destructive thing over and over again, and not realizing it. This is the nature of collectivism, and debt is the primary enabling mode of denial.


@Winter: I agree, the "collective idealism" never goes away, because the people never learn. That is why I said there won't be any breakup of the EU; instead the Europeans will prefer authoritarianism, probably leaning towards totalitarianism because of the culture of idolizing social contracts. Germany will bail out the PIIGS because they need markets to sell their industrial goods, and so they need to force the PIIGS to borrow more money. And Germany will decide that they must force those PIIGS to make the necessary social changes so the PIIGS will produce as much as they spend. Just more top-down futility that ends up like the 1920s and ultimately 1940. Can you help me to understand why this culture is so widespread in the EU? My vision of American culture used to be one of individuals who can do for themselves.

Money is very important, because when it does not represent fairness, then the society becomes violent after a falsely priced utopia. For example, the fractional reserve fiat system is based on the debasement via the inflation tax of debt. When it is done by an authoritarian central bank, as is the case everywhere in the world now, instead of the free market of multiple fraudulent private banks as was the case in the USA in the 1800s, then people have no choice. This continual transfer of the real capital of productive people to the capitalists destroys the information in the society about relative value. Thus the society mal-invests and even mal-educates (that debt and welfare are optimal, etc.), which results in deeply embedded destruction of fairness and productivity. It is a curse so hurtful to the people in it, that I think I know what hell is. Did I answer my own question about what created the culture of Europe? But why do people get stuck in this culturally embedded curse? Well, for one reason, people think it is more fair, because they have this delusion that man can create superior top-down social control. But why are people so stupid? I think it is because if there were no stupid people, then there would be no smart people. So I just have to shrug my shoulders and say "thanks for giving me the opportunity to devise a technology that is more intelligent, rich, and fair". I should note that people do not really have much choice, because money is a social contract, due to the critical importance of fungibility, and thus money will naturally trend towards centralization. This is why I started to look for a technological solution to money.

As for those capitalists who have the delusion of being a superior race: software is coming, and they can't capitalize knowledge, only the capital-intensive hardware for deploying knowledge. Their futile, useless slavery capital will wither as software is owned by the minds where it resides, because it is never static. And none of their attempts to capitalize the internet can help them. It doesn't matter how much they play their role in the fascist curse, they can't stop the fledgling age of software.

Winter, I think you want people to prosper, so I think you will abandon your love of top-down social contracts in time. Someone just needs to demonstrate another economic model that is working better. Given the unfairness of the collectivist fiat model, the only consolation scraps for the slaves is the welfare state. So it is not surprising that you would view that as the only option. I come as a lamb in wolfskin, wide-eyed, bearing gifts of a nascent technology and economic model for uncoordinated cooperation. Would you not embrace a working solution that vests mankind in his own capital, his mind?

Followup:

@Winter: A summary of my link is that a collectivist promises that which is impossible to predict, the future, and thus is a habitual liar. The free market promises nothing, not even freedom, and thus is a habitual truth-teller. I wouldn't want to be known as a liar.

@Winter:

1. So you've conceded that collectivism is always about force.

2. What is the equation that tells with certainty which skills will be in demand and not oversubscribed in the distant future?

3. If the state forces uniform indoctrination, then how does the workforce adapt to the unknown future? What if the future is we are invaded by aliens and they only need native tribes who know secret herbal medicines, and they kill the rest? What if they only let cannibals live? What if they only let collectivist live? Thus I am okay that collectivists exist, and encourage you to add more to your ranks.

4. Nature requires many finely grained variable outcomes (degrees-of-freedom) in order to anneal the globally optimized best fitness to dynamic outcomes. Less diversity means fewer degrees-of-freedom, and thus the system can't optimize. This is a scientific proof that collectivism is for those who like sub-optimal results.

5. Even parents can't force a child to be interested in something the child is not interested in, so neither can the state. What the state is good at is making kids so bored and uninterested, that many rebel and hate education. The state can buy off everyone, and this is called a debt bubble. But I think that power will diminish soon with the rise of a new technology.

6. The educational market is efficient now. Proof is that I opted out, which has been very efficient in many facets.

7. I didn't write that I am not trading. I decide who and how I trade with most efficiently. For the moment, apparently you think you are trading more efficiently in a collectivist model, but that might permanently collapse soon (if I am correct that the industrial age is permanently dying). Living together does involve some compromise, but only to the degree that my needs and desires inhibit those of others, e.g. if my neighbor plays their karaoke at 120dB. But I have learned it is more efficient to not involve govt gridlock, and simply talk to my neighbor, or move if we can't agree (which I have done a few times). To the extent I am impacted by the poor education of others (not much), it would be more so under this poor education system. However, I see this as an opportunity. I see all those frustrated kids as fertile ground for my new computer language. Hopefully others also see economic opportunities in providing ways to side-step the morass of govt.

8. The fear-mongering about what would happen without universal, uniform education, is actually the outcome that we are getting with it, and the opposite of the outcome that the basic science of fitness, entropy, and annealing says we would get without it. I could explain why in more detail, with numerous examples, but it won't change your mind. The only thing that will change your mind is to diminish the power of the state and the corporation via some technology (e.g. one which makes it technically impossible to tax knowledge). Then you will have no choice. That is a theoretically possible outcome of my current work.

9. Regarding #7, I think you are going to be shocked at the severity of the death of the industrial age. It is going to destroy your world. I hope you find the knowledge age. And it will be the antithesis of the collectivist age which relied on material control, industry, and physical assets capital.

This discussion continued at the following link, with "666 will be the collectivists last attempt to tax, as they destroy the industrial age":

https://goldwetrust.forumotion.com/t37-666#4565

http://esr.ibiblio.org/?p=3695&cpage=2#comment-321993

Shelby wrote:
@Shelby:
...my less than $100 computer (in 1990s dollars)...
...because the economies-of-scale are increasing...
In short, all profits are derived from the knowledge portion of the business...

So this requires that those who produce knowledge be wealthier than those who don't, otherwise these economies-of-scale (driven by a large population and the declining costs of physical production) are wasted in the collectivist redistribution scam.

The propaganda about overpopulation is merely the call of passive capital for a bailout from the knowledge age. But no one can bail them out; game over, checkmate. Do they even know what time it is?


Last edited by Shelby on Thu Sep 08, 2011 4:47 pm; edited 1 time in total


Unthink & Diaspora

Post  Shelby Mon Sep 05, 2011 9:54 am

http://esr.ibiblio.org/?p=3500&cpage=1#comment-321307

Unthink is not the breakthrough, because afaik it isn't open source, nor does it decentralize the server databases. Diaspora may be.


Color Kindle, iPhone, Android competition is really about the software content

Post  Shelby Wed Sep 07, 2011 12:32 pm

http://esr.ibiblio.org/?p=3695&cpage=1#comment-321731

Shelby wrote:
The future is in the software that runs on the devices. Thus the competition is really about the model for leveraging the content of the devices.

Amazon: closed-source portal marketplace
Apple: closed-source one-stop marketplace
Google: open-source marketplace, closed-source derivative ad server marketplace

Google doesn't need to care so much about device content, they only need the internet ad market to continue expanding. Amazon doesn't need to care so much about the qualities of content, they only need the content providers market to continue expanding (which is why I think they offer the commodity server business). Thus Amazon is working for Google. Apple has to micromanage content, because their model depends on their quality being differentiated from the others.

Clearly Apple is more vulnerable to disruption than Amazon.

Thus ultimately they are all working for us, we the open source programmers.

Apple: devices – uses content and services to sell devices

That is a subset of the more general model I offered, which is that Apple needs to control content quality in order to differentiate their products. They require that their products be differentiated on quality, in order for the total "Apple experience" to be differentiated. Otherwise there is no compelling reason to enter their jailed garden, where more open markets exist for Android and Amazon. Please realize that as hardware costs decline, all significant nominal profits will be made in software, not in hardware. Apple is thus the most vulnerable to disruption because they depend on being able to offer less choice and higher quality, but that isn't the nature of open source Inverse Commons, where high quality follows from more open choices.

Amazon: store – uses devices and content to sell stuff to people

That is a superset of my model, and it has no predictive power, because all 3 of them use devices and content to sell something; Google sells derivative ad servers. The significant predictive quality of Amazon's model is that it is a portal and not a one-stop shop. Thus they are leveraging the devices (Kindle) and commodity servers (EC2) businesses to drive more partner websites. Hopefully you have noted that Amazon takes orders for thousands of partners, where you can find the same product for sale on the partner's website.

Google: ads – uses devices and content to sell eyeballs to advertisers

Yup, and I hope you get the point that the derivative model is immune to the need for any particular result with the devices, other than that no one else can monopolize the devices to block ads. The main goal of Google is to lower the cost and increase the number of devices. Google doesn't need to win the device marketplaces.

So two are “working” for me. One is “working” for someone else.

That is the simplistic and obvious relationship, which doesn't capture the more significant, derivative economic relationship that I outlined.

I noted that ultimately they are all vulnerable to open source business models, i.e. Apple is vulnerable to "Web3.0 open apps" (assuming they won't be limited to Flash nor HTML5, but fully programmable), Amazon is vulnerable to open source exchange marketplaces (e.g. what I am working on), and Google is eventually vulnerable to open source decentralized social networking (e.g. Diaspora) where rent-free ad serving and relevancy are inherent in the network.

And they are all working to expand the market, which gives more economy-of-scale to our open source markets. Since closed-source can't scale, I can boldly claim they are working for open source programmers.

Whatever makes people stick to MS Windows, it is NOT the quality of MS technology

Another major factor is that the browser runs fine on XP. So this opened up an unlimited world of expansion that didn't require change or OS differentiation. Also, most people are task focused and want the minimum disruption possible. An outlier OS presents stumbling blocks along the way, and its theoretical efficiency gains are difficult for the individual to mentally amortize across those stumbling blocks. Apple solved this by being a first mover in a new device space.

This is another reason that I am trying to kill project-level granularity of composition, and this even applies to open source. We wouldn't get so stuck if programs could mutate incrementally "with enough monkeys banging randomly on the source" (paraphrasing ESR).


Copute's Beginner Tutorial

Post  Shelby Thu Sep 08, 2011 2:47 pm

http://esr.ibiblio.org/?p=3695&cpage=2#comment-321973

Shelby wrote:
@Nigel:
but it’s not going to get disrupted in this fashion anytime soon

Disruption is often a waterfall event that isn't expected by most observers. This is because it is often a surprising technology paradigm-shift, and the nature of new technology is that it isn't well known before it is created and proven in the market.

I am specifically working on the technology that could indirectly disrupt Apple's "native apps better than web apps" model, and more directly cause imperative programming, such as Apple's required development system, to suffer an orders-of-magnitude loss in relative productivity. Last night I rough-drafted a beginner's tutorial to explain it (section Copute Tutorial at the bottom of the page). The library and compiler are progressing. Is the 5-line code example in the tutorial not compelling, elegant, and intuitive enough for rapid adoption?
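As a purely hypothetical illustration (this is not the tutorial's actual 5-line example), the kind of short, declarative snippet that the productivity claim refers to looks something like the following Scala: a pure pipeline with no mutation, where each line composes with the next.

    // Hypothetical illustration only -- not the tutorial's actual example.
    object Demo extends App {
      val names   = List("alice", "bob", "carol")
      val shouted = names.map(_.toUpperCase)        // pure transformation, no mutation
      println(shouted.filter(_.startsWith("A")))    // prints List(ALICE)
    }

An equivalent imperative version needs explicit loops and mutable accumulators, and that contrast is what the orders-of-magnitude productivity argument is about.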


Technological End of the Corporation, Passive Capital, and checkmate for the NWO

Post  Shelby Sat Sep 10, 2011 2:46 pm

UPDATE: a new post on Entropic Efficiency summarizes perhaps better than this post does.

I have been working 15+ hours per day literally (no joke).

For example, I am working on Copute and I think it could disrupt their plans. Others may be working on disruptive technologies in other fields.

Copute is more likely to disrupt in the sense of "smaller things grow faster". In other words, let TPTB have control over what is a dying morass (the industrial age). Let's take control over the things that are growing faster and are the future.

I think I have the technology that will render passive capital useless, and thus their control over passive capital will not help them. The reason is that software is in everything now, even toasters. It is related to the Theory of the Firm, which says that as long as there exists a transaction cost, the corporation can exist. When corporations exist, it means that passive capital (the capital of the owners of the corporation) is able to grow faster than the capital of the individuals who actually do the work.

http://en.wikipedia.org/wiki/Theory_of_the_firm

What Copute does technologically is give rise to an economic model which eliminates the transaction cost for software integration. Thus the corporation no longer has any economic viability. Thus the capital becomes the ongoing minds of the programmers (because software is never static, and thus the value isn't the code, but the ongoing knowledge of how to adapt, improve, and integrate it). The economic model is explained here:

http://Copute.com
+ What Would I Gain?
+ + Improved Open Source Economic Model

This uniqueness of Copute's technology is explained in easy words for a non-programmer here:

http://Copute.com
+ Copute Tutorial
+ + Meaning of Computer Language

Think about this too. All engineering disciplines, even marketing, are encoding their work in software.

At the following link I explained more about what passive capital is and its link to the enslavement of mankind.

https://goldwetrust.forumotion.com/t44p75-what-is-money#4580

In addition to Copute, I had been working on a Theory of the Universe, which has apparently been proven true by another scientist, and many scientists are now embracing it. He proved that Newton's laws of gravity derive from information content, just as I had theorized! Wow! See the following link for details:

https://goldwetrust.forumotion.com/t124p15-theory-of-everthing#4581

Here follows an email discussion which was in response to the following link about the timing and method of the coming NWO currency (code-named the Phoenix):

https://goldwetrust.forumotion.com/t174-big-picture#4590

Note: if you have read the above links previously, you may want to scan them again, as I made some important clarifying edits to those prior posts, e.g. the world currency will initially be for international payments only (not all payments, as I had originally implied).

> Maybe these assholes will not even be around
> anymore by that time. Who knows. If you had asked
> Louis XVI in 1788 how he sees the future, he
> would have given a slightly different outlook
> than what was to really happen. Same with the
> German emperor in 1913, or the Ottoman sultan, or
> the Russian Czar, or Hitler in 1941, or Kennedy
> on November 21, 1963, or average New Yorkers on
> September 10, 2001, and many others.
> If you believe the SOB's crap too much, you
> predispose yourself to walk on paths created
> specially for that purpose. And if enough people
> believe it, it may actually happen. Think pure,
> act pure. Think dirty, act dirty. They are
> entrapping you with the filth they lay out for
> you. It's up to you not to go for it. We can
> renounce them with what we will not do.
>
>
>
>>Ridiculous except that TPTB published their 2018 prediction (for launch
>> of
>>the Phoenix) in the Economist magazine in 1980s, and so far it looks
>>like they are right on schedule. Switzerland just joined the EU
>>monetary union. Once Swiss central bank distorts the Swiss economy by
>>buying up $100s of billion of Euros, then Switzerland won't be able to
>>unhitch from the EU currency bloc.
>>
>>
>> > Do you realize how it sounds to make predictions
>> > 7 and 13 years into the future?
>> > To me, rather ridiculous.
>> >
>> >
>> >
>> >>The NWO currency will be a payments only currency from 2018 to 2024, i.e.
>> >>the national or bloc currencies will float against it (the FOFO position
>> >>but not 100% gold backing), then it will become the actual currency
>> >>in 2024. While it is not an exclusive currency within the nations, the
>> >>nations will continue to be wracked by the sins of debt and
>> >>fiscal deficits, so this will push people towards opening accounts
>> >>denominated in the world currency, until it is dominant by 2024.


Here follows an email discussion about what passive capital is:


>>I think I have the technology that will render passive capital useless,
>>and thus their control over passive capital will not help them.
>
> Passive capital is just the portion of their
> wealth that they don't actively use to control
> commerce. The rest is active and people have no
> way to get around it. The passive portion can be
> mobilized anytime if there is danger to their monopoly.
> So I don't see how software can do anything to
> weaken their grip on commerce. You have to buy
> gas from their gas stations. You have to buy food
> and clothes from their stores, etc. How will software make a dent on that?


Good question. Thanks for letting me see where I need to explain more.

First of all, realize that "active" means that you are producing value with your own activities, i.e. an engineer creates a new design, or an investor researches the technology before he invests and constantly evaluates the technology's progress. Passive capitalists don't do this. They want huge economies-of-scale, where they just count beans in an accounting calculation. They don't want to be bothered with actually creating the knowledge; they think the knowledge workers are just their slaves.

The profit margins on commodities, and even now on manufacturing, are close to zero; I have read that much of China is operating with razor-thin, or in some cases negative, profit margins. China values employment quantity more than profit margins. And now even most of that is being automated and costs reduced further, which will further shrink profit margins.

I am not referring to the knowledge that goes into engineering new designs; but realize that cost is spread out over billions of consumers now, so it is insignificant when it comes to these commodities and basic, highly replicated things that everyone uses.

Thus the profit that comes from commodities and industry is dying. It is being automated away to zero.

Thus the bigger slice of the current economy is really the knowledge businesses, which are the various engineering disciplines. But these all rely on software to encode and develop their knowledge base and work output, and most of them are actually software (e.g. biotech, nanotech, etc).

This is why the elite capitalists have become so aggressive at this juncture in history. Their businesses are becoming unprofitable (they own all the industries because they capitalize them via the fiat debt system which they control, so they are the owners of that system and are responsible for its aggregate profitability), and the only way they can maintain their profit margins is to use methods of control which eliminate competition, so that they can charge prices much higher than the cost of production. They can get away with this only for as long as people have no way to earn an income by working out of their own homes individually (producing their own contribution to the total knowledge of the economy).

People can't do that now, because the "transaction cost" (go read that Theory of the Firm article to understand the meaning of the term) of producing knowledge forces people to work for a corporation. An example of transaction cost: people leave a project and then the project dies, so there needs to be a corporation which can keep things organized in such a way that individual actions don't stop progress.

But with my technological innovation, the individual's contribution becomes referentially transparent, which basically means that everyone can contribute individually without adversely affecting the progress of the whole system. In fact, under my innovation, in theory the system's progress will accelerate by a Reed's Law factor (the value of a group-forming network scales with the number of possible subgroups, roughly 2^N), due to the networking effects of these individual contributions leveraging each other. This economic model wasn't possible without referential transparency, because the conflicts of interest among individuals would cause gridlock in the overall system, since individual contributions could cascade like spaghetti or dominoes into all the other contributions. A small sketch follows.
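Here is a minimal sketch of that claim in ordinary Scala (not Copute; the function names are illustrative only): a referentially transparent function depends only on its inputs, so independently written modules compose without surprises, whereas hidden shared state couples contributions together.

    // Minimal sketch in ordinary Scala (not Copute); names are illustrative only.
    object Composition extends App {
      // Referentially transparent: the result depends only on the argument,
      // so any contributor's module composes with any other, in any order.
      def discounted(price: Int): Int = price - price / 10   // 10% off, pure

      // Not referentially transparent: hidden shared state couples modules,
      // so one contributor's change can cascade into everyone else's code.
      var taxRate = 25                                        // percent, shared mutable state
      def withTax(price: Int): Int = price + price * taxRate / 100

      println(discounted(100))  // always 90, in any context
      println(withTax(100))     // 125 now, something else after taxRate mutates
    }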

This is basically what I was driving at a year ago with my research into the Dunbar number limit of humans and the effects of transaction cost.

This innovation reforms the large from the small, i.e. it is naturally a grass-roots revolution, because it doesn't require anyone to organize anyone else. Each person works to maximize his/her income individually, and that causes the decentralized system to become most of the value in the world economy.

TPTB won't be able to do anything. Their capital and control will be useless, because you can't buy the minds of individuals once they have a way to maximize their income by producing knowledge without the need for a corporation.

===========================
Another email exchange with a different person:

> Money is a symbol. It represents something. It represents objects or services that can be exchanged.
> It also represents objects that can be produced and it represents production itself.
>
> In Asia, for example, the people consider that the action of working is more valuable than the object that
> is being produced. That's how you get people to work for a few dollars/day. That's how Japan rose from
> economic defeat after WW2. It is understood that work itself is what is valuable, not material objects.
>
> People in the West are more materialistic and consider that a job is something that is owed them, and that
> the reward of work is money.
>
> Prior to the industrial revolution of the 1800s, the value of a resource was determined solely by its
> available quantity.
>
> As a result of the tremendous quantities of resources and produced goods since the 1800s, the value of
> these things is determined using lies and falsehoods.
>
> Any knowledge, any object, any production can be equated to and related to man's survival. Those things
> that increase his survival have more value to man than those things which lower his survival.
>
> TPTB control knowledge in many ways. For example, it requires 18-20 years of education to become a
> doctor that may legally practice medicine. Yet that knowledge may be found in any public library and
> studied and learned well in a few years.
>
> When you get to the point of having to place a value on Copute, i.e. its economic worth and its value
> as an exchanged thing, think in terms of its ability to increase or reduce survival.

What I think will happen, once the transactional friction that gives rise to the corporation is erased by the referential transparency of Copute (at least for software and derivatives of software, which is basically everything, since software is just recursive logic), is that the value of knowledge will be more free-market priced, and thus "what was produced" will be determined by the market's valuation of each module of logic.

Thus I have already solved that issue: pricing becomes decentralized and market driven, and thus technologically impossible to manipulate.


First-class disjunction type in Scala (and thus in Copute)

Post  Shelby Sun Sep 18, 2011 8:29 am
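A minimal sketch of one well-known way to encode a first-class (unboxed) disjunction type in Scala, using De Morgan duality on function types and subtype contravariance; the names here (Not, Or, IsOneOf) are mine for illustration and are not necessarily the encoding Copute adopts.

    // One common Scala encoding of an unboxed disjunction A-or-B (illustrative names).
    object Disjunction extends App {
      type Not[A]    = A => Nothing
      type NotNot[A] = Not[Not[A]]
      type Or[A, B]  = Not[Not[A] with Not[B]]   // De Morgan: A or B

      // Compile-time evidence that X is one of A or B
      type IsOneOf[X, A, B] = NotNot[X] <:< Or[A, B]

      // Accepts an Int or a String, but nothing else
      def size[X](x: X)(implicit ev: IsOneOf[X, Int, String]): Int = x match {
        case i: Int    => i
        case s: String => s.length
      }

      println(size(42))       // 42
      println(size("hello"))  // 5
      // size(3.14)           // does not compile: Double is neither Int nor String
    }

Unlike Either, nothing is boxed; the disjunction is enforced entirely at compile time by the implicit evidence.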



How Engineers, Marketing & Design, Mgmt view each other

Post  Shelby Sun Sep 18, 2011 11:24 pm

This is funny, but true.

http://twitpic.com/5xs1vy

One of my talents is that I span all 3 disciplines, and I am especially stronger in design than most engineers (I can draw, lay out, and anticipate market desires/emotions, etc.), but my mgmt style is minimalist. I would also brag that I am a strong engineer in the sense of identifying the main focus, simplifying, and achieving it. But pride comes right before the downfall, so this bragging is hereby redacted.


Last edited by Shelby on Thu Sep 29, 2011 3:07 am; edited 1 time in total


