GoldWeTrust.com

Computers:

Page 3 of 11


Server farms can't adapt to dynamic change, Re: IETF is trying to kill mesh network

Post  Shelby on Sun Aug 29, 2010 11:36 pm

Read what I wrote about asymmetrical bandwidth in 2008:

http://forum.bittorrent.org/viewtopic.php?pid=196#p196

I was criticizing BitTorrent for its socialism-style design. I think they may have fixed it due to my input?



>> Isn't internet same as a bifurcating tree?
>
>
> Yes, I believe it is.


[snip]


>> > Speaking of which there was some genetic algorithm
>> > program that someone wrote, to distribute bandwidth to
>> > a number of nodes from a centralized location. There was
>> > some cost associated with the capacity of each connection.
>> > Maybe it was delivering electrical power. It turned out a
>> > centralized generator feeding each endpoint on identical
>> > capacity wires was horrendously inefficient. It ended up
>> > evolving into a far more efficient tree like arrangement.
>> > I wish I could remember the source, the associated pictures
>> > were fascinating.
>>
>>
>>
>> Find it!!! I need that to refute those who think the
>> server farm is more efficient!!!!
>
> I found it, you're going to love it.
> It's part of Richard Dawkins' "The Blind Watchmaker".
>
> http://video.google.com/videoplay?docid=6413987104216231786&q=blind+watchmaker&total=117&start=0&num=10&so=0&type=search&plindex=1#
>
> Start around 36:00.


Okay I watched it. But that is already the structure of the internet, so how does that argue against the economics of the centralized server?

Rather, I think the key point is in the 34 - 36 minute portion: the more important advantage of many independent actors is that the system can anneal to complex dynamics. I say that is why anything centralized fails: it cannot adapt fast, due to the limited supply of mutations (99% of the internet is not able to cross-connect).

And fast adaptation (30 generations to search 350 million possibilities, thanks to a large population of mutations in each generation) is what makes evolution so powerful.

What do you think?




>> Okay I watched it. But that is already the structure of the internet,
>> so
>> how does that argue against the economics of the centralized server?
>
>
> I guess it doesn't.
>
> One argument against a centralized server is the cost burden. The
> users don't share the burden.


Yes, conflation (socialism) is misallocation.


>
> If you have an infrastructure that supports sharing of the burden
> across users, the barrier to entry is nil. With the bandwidth skewed
> as it is towards the consumer, not the producer, you need a google
> with dedicated servers to get in the game. It's a recurring theme that
> once a site takes off it is killed by its own success -- server failures,
> inability to serve pages, denial of service... what works for 1000
> users can break horribly for 20,000.


Small things grow faster, because nature trends toward maximum disorder (the 2nd law of thermodynamics, 1856; the universe is a closed system by definition), so mass cannot keep accumulating exponentially in one place.


>
> The key about fast progress is making it easy to innovate and
> introduce new concepts.


Ditto above and below...


> My network concept is itself just
> infrastructure, on top of which an unlimited number of new
> services could be delivered. The current internet is too subject
> to political whim.
>
>> Rather I think it is the point at the 34 - 36 min portion that says the
>> more important improvement of many independent actors is that it can
>> anneal to complex dynamics. I say that is the reason centralized
>> anything
>> fails, is because it can not adapt fast.
>>
>> And fast adaption (30 generations to solve 350 million possibilities due
>> to a large population of mutations on each generation) is what makes
>> evolution so fast.
>>
>> What you think?
>
>
> Evolution is fast because it isn't a random walk. Only steps that are
> along the gradient are kept. The 350 million possibilities represent
> points in some N dimensional space. Distance in this space is always
> very small. If there is a fitness quantity associated with each of the
> 350 million possibilities, it is trivial to zoom in on a local maximum.


You just restated what I wrote above.

The fact that each generative step has millions of candidates means that the system anneals rapidly. It is a gradient search (like Newton's method), but a stochastic one. The stochastic part is very important, because a single-actor gradient search can get stuck in a local minimum.
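To make that concrete, here is a minimal sketch (Python, with a made-up one-dimensional fitness landscape and arbitrary population/mutation parameters, so just an illustration of the principle): a single hill-climber converges to whichever peak it happens to start near, while a population of stochastically mutating candidates anneals to the global peak in a few dozen generations.

Code:
import math
import random

# Hypothetical multimodal fitness landscape: local peak near x ~ 1.7, global peak near x ~ 8.0.
def fitness(x):
    return math.sin(x) + 0.1 * x

def hill_climb(x, steps=1000, step_size=0.01):
    """Single actor: only ever takes steps along the local gradient, so it gets stuck."""
    for _ in range(steps):
        for candidate in (x - step_size, x + step_size):
            if fitness(candidate) > fitness(x):
                x = candidate
    return x

def evolve(pop_size=200, generations=30, mutation=0.5, lo=0.0, hi=10.0):
    """Many actors: keep the fittest each generation and mutate them stochastically."""
    population = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 10]
        population = [min(hi, max(lo, p + random.gauss(0.0, mutation)))
                      for p in parents for _ in range(10)]
    return max(population, key=fitness)

print("single actor from x=1.0:", hill_climb(1.0))  # stuck at the local peak near 1.7
print("population of mutants:  ", evolve())         # finds the global peak near 8.0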



>> Build it and they will use it and the centralized inertia will get overrun by
>> humanity.


The solution is that we need to make P2P more popular and mainstream. Then the government can never block it anymore. Game theory! That is why we have to build a P2P programming tool, build it into the browser, and then let the web programmers go mad with it. Then the rest is history.

Read this please for popular use cases:

http://www.ietf.org/mail-archive/web/hybi/current/msg03548.html
http://www.ietf.org/mail-archive/web/hybi/current/msg03549.html
http://www.ietf.org/mail-archive/web/hybi/current/msg03231.html

Sure, nature (per Coase's Theorem) will eventually route around the centralized inertia, but it is the person who accelerates the rate of change who earns the brownie points.



> Anyway the current internet stinks for any sort of virtual
> mesh network. It is heavily skewed on the downstream
> side. I can get 500K bytes/second down easily but it bogs down
> if I'm even uploading 30K bytes/second.

So the initial applications will evolve to fit that cost structure, but that doesn't mean the mesh is useless.

And once the mesh is there, it will force the providers to balance out the bandwidth.

The problem is that their current business models would be toast.


> I believe an ideal
> internet connection would be 2 units upload capacity for
> every 1 unit of download capacity. That would allow any
> node to source data to 2 children. Each of those could
> retransmit the data to 2 more children. This binary tree
> could be extended forever. Any node could be as powerful
> as the most powerful servers in the world today, as the
> receivers of the data expand the data delivery capacity.


The current limit is only bandwidth. Bandwidth isn't the most important
problem. The paradigm shift of just being able to make that structure
with limited bandwidth is huge.

We can upload to 2 connections now.
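To put a number on the quoted binary-tree idea, here is a minimal sketch under the stated assumptions (a perfectly balanced tree in which every node re-uploads the full stream to exactly 2 children):

Code:
def relay_tree_receivers(depth):
    """Receivers fed by one source in a perfect binary relay tree of the given depth,
    where the source is at depth 0 and every node re-uploads the stream to 2 children."""
    return 2 ** (depth + 1) - 2   # 2 + 4 + ... + 2**depth

for depth in range(1, 11):
    print(f"depth {depth:2d}: {relay_tree_receivers(depth):5d} receivers, "
          f"each node uploading at most 2x the stream rate")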

>
> This choice reflects either the consumer nature of the
> public (most people want to receive data, not source it),
> or an intentional effort to push people into being consumers,
> not producers.


Neither. It is the current design of the WWW. We simply need to change it, and the economics will overrun anyone who tries to stop it.


> My network demands symmetric bandwidth
> capacity, equal up and down.

Build it and they will use it and the inertia will get overrun by humanity.



Furthermore, those humans are programmers and can inject code, and code takes on a life of its own... etc.

> Interesting. How do you get it started programming itself then?
> A machine with infinite computing power magically programs
> itself? How?
>
>
>> It will program itself, that was precisely my point.





> There is a reality out there that is solid and real and it doesn't depend
> on my consciousness or any other for its reality.
>
> I go into the forest near my house. I find on the ground layer upon layer
> of leaves. The ones at the bottom are most decayed, on top they're
> newer, just from last fall. Each leaf had a history. It all was there even
> if I nor anyone else came to look at it. The universe has a history, the
> deeper you dig the more details you uncover. It didn't all just come into
> reality when I or anyone looked for it. The only way for the sheer volume
> of detail to have gotten there was for it to have existed and had its own
> history.


Yes, but you didn't know it.

And you will never know it all.

But mesh networking will increase your knowledge capacity by several orders of magnitude.

Sorry for the slow reply, I am programming simultaneously.

below...

>
> I'm limited by my location and my own senses. I didn't perceive your
> sunrise or your cockroach. I perceived on my computer screen words
> that supposedly originated from someone on the other side of the
> planet describing an experience he had moments before. A vast number
> of such experiences are going on right now all over the planet, yet I'm
> not connected to them so I don't perceive them directly. They exist
> independent of my contribution as observer.


Don't argue against virtual reality. You argued for it before. You said we could tap into the brain and it would be just as real as reality.



>
> If a tree falls in the forest and there is no one there to hear it, does
> it make a noise? Yes, sound waves are produced. If a human were
> hearing it he'd perceive it as humans do. If a human isn't there there
> is no human perception of it. But the sound waves occur regardless.
> Wind is produced, other trees are rocked around a bit. Animals perceive
> the noise in their own way. It all happens whether or not humans are
> around.
>
> Throughout the universe all sorts of definite things are going on right
> now, utterly unperceived by conscious intelligence. Yet they're real.
> The universe doesn't depend on human consciousness. Not one iota.






> I'm not sure what you're referring to by mesh. I picture a mesh as a
> 2D array that has been optimally triangulated. Each node connects
> to neighbors nearby, not to distant ones.


In a virtual network, nothing is distant. That is the key point.

You know from your own career that changing the paradigm (software) is
more efficient than changing the hardware.


> Each connection is the same
> capacity in terms of bandwidth. There are inherent problems with this
> if there are centralized servers everyone wants to contact -- all the
> intervening nodes get saturated just passing packets around.


No, in fact trunk-and-branch is more resource-efficient. Go study the science on this, please.

Do not tell me we have infinite resources. Yes, we do, but economics still matters, because if you violate economics, you have violated physics.


>
> If information is dumped into the mesh as a whole and spread around
> redundantly, the bandwidth is not wasted.

Exactly! That is why a virtual network is not distant! You know that
cache proxies are more numerous than servers on the internet.

You are getting it.

>
> The internet I believe today is more like the circulatory system, or
> the branches of a tree.


Exactly. Aka, "Hub and Spoke".


> Leonardo Da Vinci recognized that if you
> measure the circumference of a tree where it splits into two limbs,
> the parent limb is always the sum of the two children. This makes
> perfect sense. Ditto for the veins and arteries, they keep bifurcating
> into smaller and smaller vessels, yet the cross section of all the
> children at any node matches the cross section of the parent.


Yes, you can't violate the laws of physics with your machine-learning economics. That has been your mistake thus far.

>
> I believe the human circulatory system takes up only 3% of the
> mass of the body -- it's incredibly efficient


YES!!!!!!!!!!!!!! EFFICIENCY!!!!!!!!!!!!!!!!











It is not critical that they be physically meshable; in fact, that is not economically efficient (refer to Coase's Theorem).

A virtual mesh network (over IPv4) is more efficient, because the economics of hub-and-spoke is well proven to be a lower-cost structure.

Coase's Theorem is that nature will always route around any artificial opportunity-cost barrier.

IPv4 allowed for mesh networking and did not provide for the NATs we have today. NAT was a corruption.


> Hope you read all my emails. Realize there are already infinite
> realities out there right now. We just need to tap them. The resource
> is waiting for us. We don't have to create it.
>
> We already have the pipes to it, it is called the internet.
>
> We have the computers at the end of the pipes.
>
> The problem is that those infinite realities can't be unleashed, because
> the power structures are preventing those computers at the ends of the
> pipes from cross connecting without going through a central server. The
> server farms are the bottleneck that stifles those infinite realities
> from interacting and creating the sort of automated intelligence and
> results you are envisioning.
>
>
>
>> Sure, I'm curious.
>>
>>> Machine Intelligence is always bubbling in the back of my mind.
>>> > Steady progress.
>>>
>>>
>>>
>>> I know how to get what you want. And faster.
>>>
>>> Want me to explain?


Dynamic typing is destroying composability and evolution

Post  Shelby on Mon Aug 30, 2010 5:52 pm

http://esr.ibiblio.org/?p=2491#comment-276662

Type inference errors should be better programmed to illustrate the solution tree in code.

Untyped languages cannot be referentially transparent, which is required for massive-scale composability and parallelization (think multi-core and virtual mesh networking).

Adaptation (evolution) to dynamic change requires the maximum population of mutations per generation, i.e. independent actors.

Copute the dots.
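To illustrate the composability point (my own generic example, not from the linked comment): a referentially transparent function can be memoized, reordered, or farmed out to parallel workers without changing its results, while a function that leans on hidden shared state cannot.

Code:
from multiprocessing import Pool

def scale(x):
    """Referentially transparent: the result depends only on the argument,
    so calls can be cached, reordered, or run in parallel safely."""
    return x * 3

counter = 0

def scale_with_state(x):
    """Not referentially transparent: the hidden counter makes the result
    depend on call order, so parallel composition is unsafe."""
    global counter
    counter += 1
    return x * counter

if __name__ == "__main__":
    with Pool(4) as pool:
        print(pool.map(scale, range(8)))            # deterministic under any scheduling
    print([scale_with_state(x) for x in range(8)])  # depends entirely on evaluation order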


Complexity

Post  Shelby on Sat Sep 04, 2010 3:47 pm



Free market law can defeat politics

Post  Shelby on Mon Sep 13, 2010 5:10 am



Topology of networks is driven by the constraints on the free market

Post  Shelby on Wed Sep 15, 2010 11:49 am

Originally written 04-14-2008, 05:46 AM:
shelby wrote:It is fascinating to think of networks this way. The neurons in the brain, people in society, cities on the interstate highways, telephones in the telephone network, computers connected to the internet, hyper-links within web pages, are all interacting nodes of a massively parallel network.

Then it is interesting to consider how the constraints effect (cause) the topology of the network, and to further contemplate how constraints can be destroyed, so as to exponentially increase the information content of society. I think such destruction mechanisms are often referred to as disruptive technology.

For example, think of how distance cost affects road networks, which is also the topology of the internet. You end up with clusters of massively parallel local connections, and then multiple large backbones connecting these.

Note how the internet has broken down the constraint of distance between people. Note how the Wikipedia concept of editable web pages (which is what MySpace is at the social-collaboration level) has broken down the constraints of time and place on collaboration.

This theory is so powerful, it tells me precisely how to focus my work in order to become exponentially more effective on destroying the topological constraints to maximizing the information content of society (and thus on my wealth and knowledge and prosperity).

dash, I think you need to think more of your machine intelligence in terms of breaking down constraints in the potential network of real people, so that they can experience more interactions, without the material constraints.


Link to my economic optimization suggestion for Bittorrent.org

Post  Shelby on Fri Sep 17, 2010 4:19 pm

http://forum.bittorrent.org/viewtopic.php?id=28

Here is the guy "Dave" that I was discussing with:

[photo: David Harrison]

David Harrison, Ph.D.: CTO, Co-Founder
David Harrison is Chief Technology Officer and co-founder. Prior to Flingo, he was the founder of BitTorrent.org and invented BitTorrent's Streaming protocol. David Harrison previously held a post-doctoral position in the Video and Image Processing Lab in the Electrical Engineering and Computer Sciences Department at UC Berkeley. He received a Ph.D. in Computer Science from Rensselaer Polytechnic Institute.


Facebook creator/CEO allegedly hacked accounts using Facebook login data

Post  Shelby on Sat Sep 18, 2010 7:09 pm

http://www.thedailybeast.com/blogs-and-stories/2010-09-08/mark-zuckerberg-at-harvard-the-truth-behind-the-social-network/3/

In March 2010, the website Business Insider ran a story—surprisingly under-circulated—in which sources (Business Insider got access to instant messages and emails, and conducted “more than a dozen” interviews) claim that Mark Zuckerberg, anxious about an upcoming article on the ConnectU scandal, hacked into the email accounts of two Crimson reporters, using login data he found by applying failed thefacebook.com passwords to Harvard email accounts. Later that summer, Business Insider sources show him hacking into ConnectU founders’ email addresses, forming fake Facebook profiles, and tinkering with the ConnectU site.

Business Insider also has an instant-message exchange that supposedly took place between Zuckerberg and a friend in February 2004, in which Mark boasts about all the private information he’s gleaned as Facebook czar, and calls the Harvard students that trust him “dumb fucks.”

Seems to me these illegal acts could plausibly be used by the CIA to blackmail him into letting them harvest user data:

http://www.google.com/search?q=facebook+CIA


Root finding: Newton's method, bisection, and the secant method

Post  Guest on Sat Sep 18, 2010 8:46 pm

Recently I was introduced to a root-finding method that converges much faster than simple bisection: the secant method.

Recall Newton's method for finding a square root (for example): you start with a guess, divide the original number by your guess to produce a quotient, and then average your guess with that quotient to produce a new guess. Do this repeatedly and the answer converges quickly; each step roughly doubles the number of correct bits, which is very good. The catch is that Newton's method requires knowing the derivative of the function.

The secant method gets nearly the same doubling of computed bits per iteration using only function values.

Suppose you're trying to find the root of a function f(x). You pick a guess X0 and compute the value of the function there. You then pick an X1 near X0 and compute the value of the function there too. With these four values you can compute the slope of the line through (X0, f(X0)) and (X1, f(X1)). You then compute an X2 that is your estimate of where the function crosses the X axis (Y = 0): just extend that line until it hits Y = 0, then repeat with the two most recent points.

The bisection method, by contrast, requires finding X0 and X1 such that f(X0) and f(X1) differ in sign, so one can be certain there is a crossing of the X axis between them. One then computes X2 = (X0 + X1) / 2.0 and evaluates the function; X2 replaces whichever of X0 or X1 gives the same sign for f. Bisection gains only about one bit of the answer per iteration.

Don't know if this is relevant, I just wanted to share.
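A minimal sketch of the two approaches (the function and starting points are just an arbitrary example): the secant step extrapolates the line through the two most recent points, while bisection halves a sign-changing bracket and gains roughly one bit per step.

Code:
def secant(f, x0, x1, iterations=8):
    """Extrapolate the line through (x0, f(x0)) and (x1, f(x1)) to where it crosses y = 0."""
    for _ in range(iterations):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:              # already converged; avoid dividing by zero
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

def bisect(f, lo, hi, iterations=8):
    """Keep a bracket [lo, hi] with f(lo) and f(hi) of opposite sign; halve it each step."""
    for _ in range(iterations):
        mid = (lo + hi) / 2.0
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

f = lambda x: x * x - 2.0                  # root is sqrt(2) = 1.41421356...
print("secant:   ", secant(f, 1.0, 2.0))   # essentially full precision after a few steps
print("bisection:", bisect(f, 1.0, 2.0))   # only ~2-3 correct digits after 8 halvings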


Algorithms

Post  Shelby on Sat Sep 18, 2010 9:42 pm

Well, I wish I could link to a numerical algorithms book online (I had a red book but I don't know what happened to it); do you know of any? Obviously the Knuth books are great.

Here are some Google Knol links:

http://knol.google.com/k/knol/Search?q=incategory:mathematics
http://knol.google.com/k/differential-binary-trees
http://knol.google.com/k/pq-trees-and-the-consecutive-ones-property


Re: Computers:

Post  Guest on Sat Sep 18, 2010 10:02 pm

Shelby wrote:Obviously the Knuth books are great.

Ah yes, obviously.

I loathe Knuth and his style and his books and the oohs and aahs that everyone showers on him. He is highly overrated. His focus is on the analysis of the efficiency of an algorithm, as opposed to the real problem, which is the creative act of devising the algorithm in the first place.

He presumes a person will devote the time (and have the time to devote) to a rigorous mathematical analysis of every algorithm one considers using. What poppycock! Analysis only works for the most trivial programs and algorithms. Want to know how efficient an algorithm is? Code it up and run some timing tests. Want to know which approach is better? Implement both and compare how fast they run. No need to invest time in analyzing them. Just measure them.

My god, don't get me started. Knuth is a jackass. Ivory-tower crackpot. And the fact that Bill Gates loves him lowers his reputation even more (in my eyes).


Huffman Encoding

Post  Guest on Sat Sep 18, 2010 10:12 pm

Shelby, another thing you need to be aware of, since it relates to bifurcation of trees: you need to understand how Huffman encoding works.

With Huffman encoding you ideally have a binary tree, and you decide left or right at each node by a 0 or 1 bit. Code words have a variable number of bits. The intent is to minimize the number of bits required for a typical message. For example, since the letter 'e' is much more common than the letter 'q', you want the code word for 'e' to be much shorter than the code word for 'q'.

Huffman devised a scheme that is guaranteed to create the most efficient tree. Suppose you have a list of weights for the various "words" you want to encode. What you do is sort them from lowest to highest. Then you realize the two smallest ones will be siblings, so you create a node that has those as its children, and its weight is the sum of its children's weights. The two children are now replaced in the list by the parent, and you've just gotten rid of one word you need to deal with. Do this until you're down to one word, and that's the root node of the perfectly weighted tree.

Example:
3 4 5 6 12 are your weights
3 and 4 become siblings of a node with weight 7, and we resort:
5 6 7 12
5 and 6 become siblings of a node with weight 11, and we resort
7 11 12
7 and 11 become siblings of a node with weight 18, resort...
12 18
12 and 18 become siblings of a node with weight 30 and we're done.
So you're left with this encoding, 0 = left branch, 1 = right branch
1 = 12
011 = 6
010 = 5
001 = 4
000 = 3
[attached image: the Huffman tree for this example]
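Here is a minimal Python sketch of that pairing procedure, using the same example weights (just an illustration of the construction, not any particular file format's table layout; the symbol names are made up). The 0/1 labeling of siblings is arbitrary, so the code lengths match the example above even though the exact bit patterns may be mirrored.

Code:
import heapq

def huffman_codes(weights):
    """Build optimal prefix codes for {symbol: weight} by repeatedly merging
    the two lightest subtrees, exactly as described above."""
    # Heap entries: (weight, tiebreak, {symbol: code-so-far})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(weights.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # lightest subtree  -> gets prefix bit 0
        w2, _, right = heapq.heappop(heap)   # next lightest     -> gets prefix bit 1
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# The example weights from above: 3 4 5 6 12
print(huffman_codes({"w3": 3, "w4": 4, "w5": 5, "w6": 6, "w12": 12}))
# -> 1 bit for weight 12, 3 bits each for weights 3, 4, 5 and 6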


I wrote a Huffman encoder for JPEG in 2008

Post  Shelby on Sun Sep 19, 2010 3:11 am

I wrote a huffman encoder in HaXe (Flash, etc) for JPEG in 2008

Code:
/* by Shelby H. Moore III, Created: Nov. 2008, Last Updated: Nov. 2008
   I make no claims nor warranties whatsoever. Please retain this header with my name in all copies.
   Please kindly credit me in any derivative works.
*/
package huffman;
import Assert;
import huffman.RWbits;

// See tutorial at end of this file
class LeafCode
{
   public var bitcount   (default, null) : Int;
   public var code      (default, null) : Int;
   public function new( _bitcount : Int, _code : Int )
   {
      bitcount = _bitcount;
      code = _code;
   }
}


// value of the leaf (decoded from the LeafCode)
typedef LeafValue = Value;


class Tree
{
   private var leaves   : Array<UInt>;
   private var values   : Array<Int>;
   private var map      : Array<LeafCode>;   // map[LeafValue.value] = LeafCode, map.length == 0 if not initialized
   private var invmap   : Array<Array<Null<LeafValue>>>; // map[LeafCode.bitcount][LeafCode.code] = LeafValue, map.length == 0 if not initialized
   private var rwbits   : RWbits;

   // leaves[i] = number of Huffman sub-trees of length i bits, starting from root that end at LeafCode (value not node)
   // values = values of leaves in the order that corresponds to for( i in 0...leaves.length ) leaves[i];
   public function new( _leaves : Array<UInt>, _values : Array<Int>, _rwbits : RWbits )
   {
      leaves = _leaves.copy();
      values = _values.copy();
      // Remove useless 0 elements at end
      while( leaves[leaves.length - 1] == 0 )
         leaves.pop();

      map = new Array<LeafCode>();
      invmap = new Array<Array<Null<LeafValue>>>();
      InitMaps();

      rwbits = _rwbits;
      Assert.IsTrue( rwbits.rwcount >= leaves.length );
   }


   // Algorithm presented on page 50 of http://www.w3.org/Graphics/JPEG/itu-t81.pdf (CCITT Recommendation T.81)
   /* To visualize, leaves = [0, 0, 1, 5, 1, 1, 1, 1, 1, 1] for following LeafCodes:
         00
         010
         011
         100
         101
         110
         1110
         11110
         111110
         1111110
         11111110
         111111110
   */
   public function InitMaps()
   {
      if( map.length != 0 && invmap.length != 0 ) return; // Already done?
      var code = 0;
      var n = 0;
      var start = 1; // skip leaves[0] because ***footnote below
      var max = start + 1;
      for ( i in start...leaves.length )
      {
         invmap[i] = new Array<Null<LeafValue>>();
         Assert.IsTrue( leaves[i] <= max ); // leaves[i] can't have more values than can fit in i bits
         for ( j in 0...leaves[i] )
         {
            var t = code /* we mask here instead of in reading and writing of bits*/& (max - 1);
            map[ values[n] ] = new LeafCode( i, t );
            invmap[i][t] = new LeafValue( i, values[n] );
            n++;
            code++;
         }
         code += code; // code <<= 1, code *= 2
         max += max; // max <<= 1, max *= 2
      }
   }
   // ***always node (not a leaf) at root (top) of huffman tree, see tutorial at bottom of this file


   // Returned LeafCode.num_bits <= leaves.length
   public inline function Encode( value : UInt )
      : LeafCode
   {
      return map[value];
   }


   // Inputs bitcount significant bits (or less for last bits of encoded stream), where LeafCode is in the most significant bits
   // If bitcount < leaves.length, will not match LeafCode.bitcount > bitcount
   // Returns null if input matches no LeafCode
   public function Decode( bits : Int, bitcount : Int )
      : Null<LeafValue>
   {
#if table_walk_method_to_Tree_Decode
      var code = 0;
      var n = 0;
      //var start = 1; // skip leaves[0] because ***footnote above
      var max = 2; // start + 1
      var mask = bitcount; // bitcount + 1 - start
      for ( i in /*start*/1...bitcount )
      {
         mask--;
         var t = leaves[i];
         if( t != 0 )
         {
            n += t;
            code += t;
            var candidate = bits >>> /*bitcount - i*/mask;
            if( candidate < (code & (max - 1)) )
            {
               var copy = code;
               for( j in 1...t+1 )
                  if( candidate == (--copy & (max - 1)) )
                     return new LeafValue( i, values[n-j] );
            }
         }
         code += code; // code <<= 1, code *= 2
         max += max; // max <<= 1, max *= 2
      }
#else
//return new LeafValue( 32, bits );
      //var start = 1; // skip leaves[0] because ***footnote above
      var mask = bitcount; // bitcount + 1 - start
      for ( i in /*start*/1...bitcount+1 )
      {
         mask--;
         var leaf = invmap[i][bits >>> /*bitcount - i*/mask];
         if( leaf != null )
            return leaf;
      }
#end
      return null;
   }


   public inline function WriteEncoded( value : Int )
   {
      var leaf = Encode( value ); // make sure Encode() isn't called twice if Write() is inline and not optimized
      Write( leaf );
   }


   // Input must be w.bitcount <= rwbits.rwcount, which is assured if w = Tree.Encoded()
   public inline function Write( w : LeafCode )
   {
      rwbits.Write( w.bitcount, w.code );
   }


   public inline function ReadDecoded()
      : Int
   {
      return rwbits.ReadDecoded( Decode );
   }


   // Same restriction on input value, as for Tree.Write()
   public inline function Read( bitcount : Int )
      : Int
   {
      return rwbits.Read( bitcount );
   }
}

/* http://www.siggraph.org/education/materials/HyperGraph/video/mpeg/mpegfaq/huffman_tutorial.html
A quick tutorial on generating a huffman tree
Lets say you have a set of numbers and their frequency of use and want to create a huffman encoding for them:

  FREQUENCY       VALUE
  ---------      -----
       5            1
        7            2
      10            3
      15            4
      20            5
      45            6

Creating a huffman tree is simple. Sort this list by frequency and make the two-lowest elements into leaves,
creating a parent node with a frequency that is the sum of the two lower element's frequencies:

       12:*  <--- node
       /  \
    5:1  7:2  <--- leaves

The two elements are removed from the list and the new parent node, with frequency 12, is inserted into the list by frequency.
So now the list, sorted by frequency, is:

      10:3
      12:*  <--- inserted tree node
      15:4
      20:5
      45:6

You then repeat the loop, combining the two lowest elements. This results in:

       22:*
       /  \
  10:3  12:*
   /      \
 5:1      7:2

and the list is now:

      15:4
      20:5
      22:*
      45:6

You repeat until there is only one element left in the list.

       35:*
       /  \
  15:4  20:5

      22:*
      35:*
      45:6

             57:*
         ___/    \___
      /            \
    22:*          35:*
    /  \          /  \
 10:3  12:*    15:4  20:5
        /  \
      5:1  7:2

      45:6
      57:*

                                  102:*
                __________________/    \__
              /                          \
             57:*                        45:6
         ___/    \___
      /            \
    22:*          35:*
    /  \          /  \
 10:3  12:*    15:4  20:5
        /  \
      5:1  7:2

Now the list is just one element containing 102:*, you are done.

This element becomes the root of your binary huffman tree. To generate a huffman code you traverse the tree to the value you want,
outputing a 0 every time you take a lefthand branch, and a 1 every time you take a righthand branch.
(normally you traverse the tree backwards from the code you want and build the binary huffman encoding string backwards as well,
since the first bit must start from the top).

Example: The encoding for the value 4 (15:4) is 010. The encoding for the value 6 (45:6) is 1

Decoding a huffman encoding is just as easy : as you read bits in from your input stream you traverse the tree beginning at the root,
taking the left hand path if you read a 0 and the right hand path if you read a 1. When you hit a leaf, you have found the code.
*/


Re: Computers:

Post  Guest on Sun Sep 19, 2010 3:46 am

Shelby wrote:I wrote a huffman encoder in HaXe (Flash, etc) for JPEG in 2008

Finally you may have actually contributed something that is useful to me. I've never heard of HaXe. I've often wanted to author Flash files, but I want to use open-source tools with a traditional Unix-style Makefile, text editing, etc. (as opposed to point-and-click user interfaces). Maybe this HaXe is the ticket.

Thanks!


Static binary trees, Goertzel algorithm

Post  Guest on Sun Sep 19, 2010 4:12 am

Shelby wrote:I wrote a huffman encoder

While we're on the subject of trees, there is a binary tree approach where the children of node N are always located at slots 2N+1 and 2N+2, and the root node is at N=0. That way, if you have a static binary tree, you can just hardcode it. This is used in TrueType fonts (IIRC). It's an interesting puzzle how to organize your data that way if you start with an ordered list of elements.
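A small generic sketch of that layout (my own illustration in Python, nothing TrueType-specific): pack a sorted list into an array so the children of slot N sit at 2N+1 and 2N+2, which allows pointer-free binary search over a static, hardcodable table.

Code:
def to_implicit_tree(sorted_items):
    """Place a sorted list into an array so that slot N's children are at 2N+1 and 2N+2."""
    size = 2 ** len(sorted_items).bit_length() - 1   # enough slots for a balanced tree
    tree = [None] * size

    def fill(lo, hi, slot):
        if lo >= hi:
            return
        mid = (lo + hi) // 2
        tree[slot] = sorted_items[mid]       # median becomes this subtree's root
        fill(lo, mid, 2 * slot + 1)          # left half under the left child
        fill(mid + 1, hi, 2 * slot + 2)      # right half under the right child

    fill(0, len(sorted_items), 0)
    return tree

def contains(tree, key):
    """Pointer-free binary search over the implicit array layout."""
    slot = 0
    while slot < len(tree) and tree[slot] is not None:
        if key == tree[slot]:
            return True
        slot = 2 * slot + 1 if key < tree[slot] else 2 * slot + 2
    return False

tree = to_implicit_tree([2, 3, 5, 7, 11, 13, 17])
print(tree)                                   # [7, 3, 13, 2, 5, 11, 17]
print(contains(tree, 11), contains(tree, 8))  # True False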

Changing the subject...

The Goertzel algorithm is very interesting. If you're interested in just a few key frequencies, it is a trivial way of testing for power levels. It's used for touch-tone phone recognition. I spent some time and analyzed the algorithm to figure out how it worked.

Given two samples of a signal's history differing by a constant time
delta, one can compute the next goertzel sample as follows:
a = current sample, b = previous one, c = next one

a = b = 0
LOOP:
c = 2*b*cos(theta) - a + x // x is a new sample every iteration
a=b
b=c
goto LOOP

where theta is the frequency of interest (in radians per sample) and x is a sample from
some signal source.
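For reference, a runnable sketch of that recurrence (my own arrangement; the 941 Hz / 8 kHz example values are just for illustration). Here theta is the target frequency converted to radians per sample, and the power is read off the final two state values.

Code:
import math

def goertzel_power(samples, target_freq, sample_rate):
    """Signal power at target_freq via the recurrence c = 2*cos(theta)*b - a + x,
    where a and b are the two previous filter states."""
    theta = 2.0 * math.pi * target_freq / sample_rate
    coeff = 2.0 * math.cos(theta)
    a = b = 0.0
    for x in samples:
        a, b = b, coeff * b - a + x
    # Magnitude-squared of the filter output at the target frequency.
    return b * b + a * a - coeff * a * b

# Example: a 941 Hz tone (one of the DTMF row frequencies) sampled at 8 kHz.
rate, n = 8000, 205
tone = [math.sin(2.0 * math.pi * 941.0 * i / rate) for i in range(n)]
print("941 Hz bin: ", goertzel_power(tone, 941.0, rate))   # large
print("1336 Hz bin:", goertzel_power(tone, 1336.0, rate))  # small by comparison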


re: HaXe and Copute

Post  Shelby on Sun Sep 19, 2010 8:24 am

dash wrote:
Shelby wrote:I wrote a huffman encoder in HaXe (Flash, etc) for JPEG in 2008

Finally you may have actually contributed something that is useful to me. I've never heard of HaXe. I've often wanted to author flash files but want to use open source tools using traditional unix-style Makefile, text editing, etc. (as opposed to point and click user interfaces). Maybe this HaXe is the ticket.

Thanks!

Thanks for the tip on the Goertzel algorithm. Note that Copute aims to be the holy-grail upgrade from HaXe, and I had initially planned to compile to HaXe, so therefore Flash, PHP, C++, JavaScript, Neko VM, and (soon) Java would all be targets.


Re: Computers:

Post  Guest on Sun Sep 19, 2010 1:21 pm

Shelby wrote:...note that Copute aims to be the holy-grail upgrade from HaXe, and I had initially planned to compile to HaXe, so therefore Flash, PHP, C++, JavaScript, Neko VM, and (soon) Java would all be targets.

June 2008 I wrote a BASIC interpreter with graphics and sound additions, hoping to get my kids interested in programming.

http://www.linuxmotors.com/SDL_basic/

The whole project took under 2 weeks. Originally I was parsing the BASIC program manually, the way BASIC used to work when interpreted. I had finished the thing and gotten it working completely when the idea occurred to me that separating the parsing from the execution would be a way of boosting performance. I wanted to pay the price of parsing the syntax only once.

So I reimplemented the entire interpreter. I made a Bison (Yacc) grammar and had the parser output virtual machine code for a stack-based virtual machine, which I also implemented. That way, if you had an error like a GOTO that went to a line that didn't exist, you could know immediately, before the program ran. With the traditional approach one would have to exercise the entire program to even know whether there was a stupid typing error. I was very happy with the end result; it was much faster than my original version.

Later I learned about a few open-source attempts to create Flash content, and they even released the entire spec for the Flash file format, including its virtual machine. It had occurred to me to modify the BASIC to output virtual code for the Flash interpreter, but I never did it.

I'm sure you must be familiar with Bison. You'd want to use that approach if you want to build your Copute compiler.

[screenshot: the SDL_basic interpreter]

A note on size for the basic:
vmachine.c = 1220 lines of C code
basic.c = 404 lines
grammar.y = 1267 lines
Everything else is trivially small...
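A toy illustration of that split (my own sketch, not the SDL_basic code; the opcode set is made up): once the parser has emitted opcodes for a stack machine, execution is just a loop over the code array, and things like jump targets can be validated up front instead of only when a code path happens to run.

Code:
# Opcodes for a tiny stack machine.
PUSH, ADD, MUL, PRINT, JUMP = range(5)

def run(code):
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]
        pc += 1
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == PRINT:
            print(stack.pop())
        elif op == JUMP:
            if not 0 <= arg < len(code):      # a bad GOTO target can be rejected here,
                raise ValueError("bad jump")  # or earlier, when the code is generated
            pc = arg

# Equivalent of: PRINT (2 + 3) * 10
run([(PUSH, 2), (PUSH, 3), (ADD, None), (PUSH, 10), (MUL, None), (PRINT, None)])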

ETA: I want to relate a story. A friend of mine criticized my choice of BASIC as a language to introduce to my kids, seeing as there are better, cleaner languages that are more up to date (Python for example, maybe Ruby). And he said the GOTO statement was bad, and he quoted Dijkstra's long opposition to its use. I told my friend, who is in his 60s, that it's strange that in order to make his point he has to bring up Dijkstra, considering that my friend has been programming longer than Dijkstra had been when Dijkstra formed his opinion about GOTO being bad. I told my friend that he was fully qualified to have his own opinion about GOTO, and that I in fact valued his opinion more than whatever this fellow Dijkstra's opinion was.

At what point do we realize we're just as qualified as the "experts" to decide what is right and wrong? Evidently for some of us we never do.

I learned how to program with BASIC first. I was able to "overcome" the bad practices that BASIC leads to. And having that experience, I was able to appreciate the value of the improvements that came after BASIC. Why deny my kids that evolutionary history? Moreover, modern structured languages are so nitpicky about syntax that it takes a lot of the fun out of it. BASIC is quite forgiving that way; the syntax is easy to get right, compared to 'C' where it is SO easy to get it wrong. I was trying to lower the barrier to entry into programming in the first place. Even a bad programmer has a vast advantage over the nonprogrammer...

Dijkstra is an interesting fellow. I like him MUCH more than Knuth.
http://en.wikipedia.org/wiki/Edsger_W._Dijkstra


Howdy Dash...

Post  SRSrocco on Sun Sep 19, 2010 6:31 pm

CAN YOU SEE ME?

[image: frog]

DASH...good to see you are still alive. Actually I have missed you and your debates. How is everything going? Looks like you are still doing well programming. Anyhow....wish you might stop in the SILVERSTOCKFORUM and say a few words once in a while.

best regards,

steve



Re: Computers:

Post  Guest on Sun Sep 19, 2010 11:19 pm

SRSrocco wrote:DASH...good to see you are still alive.

Thanks! A lot of my memories of those old conversations have been recycled though... I'm drawing a blank as to details, although I do recall the handle "SRSrocco".


Re: Computers:

Post  Shelby on Mon Sep 20, 2010 4:30 am

dash wrote:
Shelby wrote:...note that Copute aims to be the holy-grail upgrade from HaXe, and I had initially planned to compile to HaXe, so therefore Flash, PHP, C++, JavaScript, Neko VM, and (soon) Java would all be targets.

June 2008 I wrote a BASIC interpreter...

So I reimplemented the entire interpreter. I made a Bison (Yacc) grammar and had the parser output virtual machine code for a stack based virtual machine which I also implemented...

I'm sure you must be familiar with Bison. You'd want to use that approach if you want to build your Copute compiler.

If you went to the Copute.com site, you would see I already implemented my own custom grammar and parser generator in JavaScript.

dash wrote:I learned how to program with BASIC first. I was able to "overcome" the bad practices that BASIC leads to. And having that experience I was able to realize the value of the improvements that came after BASIC. Why deny my kids that evolutionary history?...

Agreed, but after they learn it, teach them to drop GOTO.


Technology will drive the value of mines toward 0 (zero)

Post  Shelby on Wed Sep 22, 2010 1:14 pm

I am not talking about tomorrow, but this New York Times article from 1897 supports my hypothesis:

http://query.nytimes.com/mem/archive-free/pdf?res=F00E16FD3D5414728DDDAB0894DD405B8785F0D3

It says the ants mine and selectively choose particles.

I had the thought, perhaps it was 2007, that nanotechnology is going to destroy the value of existing mines.

Because we will soon build little bots, smaller than ants, which will mine by munching on the rock and sorting the various minerals at the particulate level. In other words, the mill will be decentralized into the earth. No chemical post-processing will be necessary. These nanobots will build piles of sorted minerals.

Thus the huge capitalization of mines won't make any sense, as the cost of minerals will plummet to near 0.

Nanotech miniature bots might reduce the cost (and energy cost) of mining gold by orders of magnitude, enabling lower grades to be extracted at accelerated rates, thus destroying the stocks-to-flows principle.

My research post on ants as external brain neurons applies:

https://goldwetrust.forumotion.com/knowledge-f9/book-ultimate-truth-chapter-6-math-proves-go-forth-multiply-t159-15.htm#3640

==========
UPDATE:
Shelby wrote:...Nanotech miniature bots might reduce the cost (and energy cost) of mining gold by orders of magnitude, enabling lower grades to be extracted at accelerated rates, thus destroying the stocks-to-flows principle.

Actually that would only be temporary. A new higher level of stocks would accumulate, and flows would eventually stabilize as a shrinking % of stocks at the higher rate.

Also, production of everything in the economy would increase too, so gold would retain its relative value, and probably increase in value.

Note that the depreciation of the value of the mines would be relative to their former fiat value (everything would be getting cheaper in fiat, but remember mining stocks are leveraged to dividends). After they stabilize, they would be investments again.

At that point I'd be invested in nano-bots. I'm guessing this could take a while and won't catch anyone by surprise.

Agreed. What I think could perhaps catch you by surprise is capital controls and, effectively, confiscation of your brokerage account. At some point the Western nations have to go after the remaining capital in the system.

It may come from an international ruling such as Basel.

But even more likely is another (maybe many more) round(s) of massive selling of stocks in a panic, especially if the metals are sold off too.

Also, fraud and deceit will radically accelerate (because Westerners have nothing to lose anymore; the veneer of "Leave it to Beaver" is entirely gone; people will become cutthroat and callous).

Also, the new economy will detach, and you will think you are doing well with +30% gains per year in your net worth, but in reality you will be falling behind rapidly. So the nanotech is not coming from just one direction; it is coming at you from every direction. For example, look at what I am working on. Things like that will happen under your nose, and you won't see them until you wake up and realize your fiat just isn't worth anything in the new economy.

You come to me and offer me a $billion, and I say sorry, I can't do anything with that; I have more cash than I can find suitable experts to hire. I will say I need brains, not cash.


Information Age economy

Post  Shelby on Wed Sep 22, 2010 9:31 pm

Read also the prior post above.

In case anyone didn't understand, my point is that we are moving into an information-economy age, out of the industrial age. We can tell that the industrial age is dying, because China is driving profit margins negative. Automation is the only way to do manufacturing profitably. The race is on.

The problem is that in an information economy, capital is nearly all knowledge-based. And really smart people are not motivated to be bought by an investor; they are motivated to invest their time and get a % of the company.

So it will take relatively small amounts of traditional money to form these new companies. This is already the case; Facebook was formed with $200,000.

Mostly this is due to computers. The physical sciences are being reduced to digital programming; e.g., biotech, nanotech, etc. are mostly computer science (I know because I've looked at the way they do their research).

Actually, what is happening is that knowledge is becoming wealth.

The knowledge holders will take the capital of the capitalists, simply by ignoring them and capturing the market. The capitalists' return on capital will not keep up, so their purchasing power will fade away.

The counter-argument that can be made is that knowledge isn't fungible, so we will still need gold as a store of value. But that misses the point. Gold earns no income, and the knowledge holders will be taking away the income sources, which will be ever more knowledge-based.

Also, because of the intractable problems facing the world at this time, where the capitalists are trying to grab a monopoly, the knowledge holders are accelerating disruptive technology. We will see an explosion of technology in the next 10-20 years that will cause more change than the entire prior history of mankind.

The Malthusians will be wrong again about technology, just as they have been at every important juncture in every century.


Microkernel issues

Post  Shelby on Sun Sep 26, 2010 4:10 am



Learned more about Mac OS X and micro-kernel issues

Post  Shelby on Sun Sep 26, 2010 5:55 pm

Okay, I have been doing a fair amount of reading about the issues of debate between Linus Torvalds (Linux's creator) and the proponents of micro-kernels, and about the multiple failures thus far of GNU Hurd. I am reasonably well qualified to understand these sorts of concurrency issues because of the work I have already done on Copute.

Mac OS X uses the Mach micro-kernel in conjunction with portions of BSD Unix, but those OS services run as a kernel process, not a user process, and Mach is not very micro-kernel anyway. So the micro-kernel aspect of Mac OS X is really just the modularization of some of the core kernel, with no isolation. Thus there is no architecture in Mac OS X for the kind of security that would allow rogue applications to run without harming the rest of the system. The iPhone's iOS derives from the same Darwin lineage as Mac OS X.

===============================

One of the big stumbling blocks (since at least 2002, if not 1992) appears to be that Unix is based around file/stream handles for accessing resources and devices. The problem is how to pass around these handles securely between user mode processes:

http://www.coyotos.org/docs/misc/linus-rebuttal.html
http://www.opensubscriber.com/message/l4-hurd@gnu.org/12608235.html
http://lists.gnu.org/archive/html/l4-hurd/2002-12/msg00003.html
http://www.coyotos.org/docs/ukernel/spec.html#frontmatter-2.2

Apparently the lack of an asynchronous form of inter-process procedure call (IPC) in the micro-kernel complicates, or reduces the performance of, any solution in their view. Also, the inability of the micro-kernel to enforce capability metadata on shared resources (and on the IPC call itself) is claimed to be a security hole and a hindrance that has to be solved in the user-process layer. I do agree that the receiver of IPC should be protected against DDoS-rate traffic until it approves the receipt of future IPC from a process.

The fundamental issue is the one I am trying to solve with Copute, which is I guess why I bother to write about it. And that is the issue that sharing a resource is a security hole for the same reason that it kills inter-function/process composability scaling, i.e. it removes referential transparency. Linus is correct where he says that the solution ultimately has to come from the language layer. Apparently Microsoft Research is aware of this with their work on Singularity since 2003. Jonathan Shapiro (PhD), formerly of Coyotos, BitC, and EROS, which was working to improve Hurd, joined Microsoft Research in 2009 to work on embedded derivatives of Singularity. However, afaics Singularity attacks the issue of trust, but doesn't address the issue of referential transparency, wherein afaics trust is not the issue that needs to be solved. They admit this:

Second, Singularity is built on and offers a new model for safely extending a system or application's functionality. In this model, extensions cannot access their parent's code or data structures, but instead are self-contained programs that run independently. This approach increases the complexity of writing an extension, as the parent program's developer must define a proper interface that does not rely on shared data structures and an extension's developer must program to this interface and possibly re-implement functionality available in the parent. Nevertheless, the widespread problems inherent in dynamic code loading argue for alternatives that increase the isolation between an extension and its parent. Singularity's mechanism works for applications as well as system code; does not depend on the semantics of an API, unlike domain-specific approaches such as Nooks [49]; and provides simple semantic guarantees that can be understood by programmers and used by tools.

The principal arguments against Singularity's extension model center on the difficulty of writing message-passing code. We hope that better programming models and languages will make programs of this type easier to write, verify, and modify. Advances in this area would be generally beneficial, since message-passing communication is fundamental and unavoidable in distributed computing and web services. As message passing becomes increasingly familiar and techniques improve, objections to programming this way within a system are likely to become less common.

Fundamentally, if the processes on the computer share state, then they cannot be secure, nor do robustness and reliability scale. Linus was correct that, at the current state of programming, the micro-kernel gains nothing, because one ends up with monolithic spaghetti anyway.

So it looks like they are all waiting for my Copute to take over the world.

But how, for example, can one share files between processes, i.e. how do we handle state that must be shared? Well, state is what you want to push out to the highest-level functions anyway, in order to maximize referential transparency, so state must be handled with permissions (i.e. capabilities), and more finely grained than Unix's owner, group, other tuple. So shared state should be owned by an interface, and that interface should decide which permissions it requires for inquiry (DDoS not denied), read, and write access. For example, the user login service might expose an interface that allows another interface to sign some state, which is stored encrypted with the user's password and can only be retrieved with that interface's signature. The web browser could then, for example, store cookies securely.
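A toy sketch of that idea (my own illustration; not Copute, Singularity, or any real kernel API): the state is owned by one interface, and other parties hold unforgeable capability tokens that carry only the rights they were issued.

Code:
import secrets

class OwnedState:
    """Shared state owned by a single interface; access only via capability tokens."""
    def __init__(self):
        self._data = {}
        self._grants = {}                  # token -> set of granted rights

    def grant(self, *rights):
        token = secrets.token_hex(16)      # unguessable capability token
        self._grants[token] = set(rights)
        return token

    def read(self, token, key):
        if "read" not in self._grants.get(token, ()):
            raise PermissionError("no read capability")
        return self._data.get(key)

    def write(self, token, key, value):
        if "write" not in self._grants.get(token, ()):
            raise PermissionError("no write capability")
        self._data[key] = value

cookies = OwnedState()
browser = cookies.grant("read", "write")   # the owning interface issues capabilities
plugin = cookies.grant("read")             # a third party gets read-only access
cookies.write(browser, "session", "abc123")
print(cookies.read(plugin, "session"))     # allowed
try:
    cookies.write(plugin, "session", "evil")
except PermissionError as e:
    print("blocked:", e)                   # write denied: no write capability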

Singularity doesn't use memory protection, because it assumes only trusted code is run and that there are no bugs in the trusted verification. Otherwise one would want some memory protection, so that processes cannot access the memory of other processes.

http://esr.ibiblio.org/?p=2635&cpage=1#comment-280393

Shelby aka Jocelyn wrote:Linus is correct that a micro-kernel degenerates into monolithic mush unless there exists a system-wide solution, which ultimately has to come from the language (virtual machine) layer. I have deduced that the issue is most fundamentally that sharing a resource is a security hole for the same reason that it kills inter-function/process composability scaling, i.e. it removes referential transparency. Apparently Microsoft Research is aware of this with their work on Singularity.

One of the big stumbling blocks for GNU Hurd (since at least 2002, if not 1992) appears to be that Unix is based around file/stream handles for accessing resources and devices. One referential INtransparency problem is how to pass around these handles securely between user mode processes:

http://www.opensubscriber.com/message/l4-hurd@gnu.org/12608235.html
http://www.coyotos.org/docs/ukernel/spec.html#frontmatter-2.2

Apparently the lack of asynchronous form of inter-process procedure call (IPC) in the micro-kernel complicates or reduces the performance of any solution in their view. Also the inability of the micro-kernel to enforce capability meta-data on shared resources (and the IPC call itself) is claimed to be a security hole and hindrance to solve in the user process layer.


Last edited by Shelby on Wed Sep 29, 2010 8:29 am; edited 2 times in total


