GoldWeTrust.com

Computers:


Computers:

Post by Shelby on Thu Sep 10, 2009 10:47 am

In this thread, let's discuss computer technology and market trends.


Instant Boot of Windows (ASRock technology)

Post by Shelby on Thu Sep 10, 2009 10:51 am

Do you know of any other providers of this type of technology?

Apparently on desktop motherboards (maybe their notebooks also?), ASRock stores a virgin boot state via the "Hibernate" shutdown mechanism (see "Hibernate" on most notebooks), and then on each subsequent boot it restores from that saved "Hibernate" image, even if you chose "Turn Off" when you "Shut Down" each time after the first "Hibernate":

http://www.asrock.com/feature/InstantBoot/index.asp

That is versus the notebook way of storing the dirty state of the OS (whatever you have loaded) when you "Hibernate".

So actually I think you could save any state to their instant boot, not just a virgin one. That is kind of cool if you want to always INSTANTLY boot to certain apps already running in their virgin state.


Capacitor aging can make your computer unstable and consume double the power

Post by Shelby on Thu Sep 10, 2009 11:01 am

Just 50% capacitor aging can cause your computer to consume double the power.

The solid Japanese-style capacitors last 2.5x as long, and that is assuming the non-solid type does not contain a faulty electrolyte:

http://www.hardwaresecrets.com/article/595/1

Here is the infamous story of exploding capacitors due to espionage:

http://www.pcstats.com/articleview.cfm?articleID=195
http://www.hardwaresecrets.com/article/48


For exponential Google growth, expose APIs to your Voice+Video chat plugin

Post by Shelby on Thu Sep 24, 2009 12:19 pm

Justin, I posted to your important blog:

http://juberti.blogspot.com/2009/09/voice-and-video-chat-in-orkut.html#comments

Shelby wrote: Hi Justin, I am in a developing country in Asia, and the world needs PROGRAMMABLE video+voice chat in the PLATFORM-INDEPENDENT browser far more than you may realize.

I am urging your team: it is EXTREMELY in Google's best interest for you to expose an API to this browser plugin, and urgently. I am also urging anyone reading this to get busy creating an open source plugin, if Google drags its heels at all.

Google's income growth is directly correlated to the ability of the web to remain decentralized and grow exponentially in use, i.e. more diversity of interfaces targeting more markets than Google's 1000s of employees could ever possibly understand and envision. I can tell you right now I know how to use your plugin to displace Yahoo Messenger in Asia, but you will never get there with your current strategy of trying to control access and interfaces to your favored Google applications. You will be exponentially too slow and your interfaces will never be diverse enough. You are doing amazing work (and are obviously extremely intelligent and talented) that is missing its full growth potential. For example, you haven't even integrated with Google Talk yet, which is a crying shame, given that Talk can be embedded in a web page but Gmail Video Chat can not (yet).

The biggest potential economic threat to Google is not losing control of the cookie or the user's id and databases (you will lose control of these anyway in billion-person developing Asia, because most people share a computer in net cafes, and because of the ease of social engineering in these markets to steal logins... to a large degree the concept of a firm login and password is not valid here, as many people create multiple identities at any whim and do not care about losing an identity... just go study Friendster or YM usage). RATHER, the biggest potential threat is that Microsoft (soon with Yahoo's help) could co-opt your ability to innovate across all platforms and drive advertising revenue growth. For example, most net cafes in Asia do not realize that an OEM license per computer does not make them legal with Windows. MSFT looks the other way for now to build market share, hoping to create so many platform dependencies (i.e. YM doesn't run in a browser, won't run on Linux, is #1 in Asia, and will soon gain unbreakable market-share inertia) that it can begin to enforce licensing as the developing world becomes more affluent and/or as Asians transition to their own private computers.

It is not just about it being free to use; it is even more (exponentially so) about the freedom to innovate on top of what Google does with its limited resources and knowledge.

The interface is the key to customer loyalty; the (friend, etc.) database will be demanded to be open, or the net's growth will stagnate. Remember that exponential growth requires nominally orders-of-magnitude more change as it progresses. Think mesh networks, instead of spokes and hubs. Many people forget that on the 30th day the water lily covers the remaining 50% of the pond! So interface diversity must increase, not be controlled in any way, for Google to reach its ultimate exponential peak. The challenge for any large thing, in order to maintain its rate of growth, is to maximize innovation and freedom. Google must not forget that its revenue is directly correlated to the freedom and exponentially diverse growth of the internet, with a few billion more people to come online in the coming years.

It is great that Google is providing server resources, but Google's correlated income growth would be exponentially faster if it worked more on mesh-networking the social processes, so that distributed databases could reside on the clients. But that is slightly longer-term; the immediate big wins come from exposing everything you do, either as open source or as APIs, at every layer of the code. For example, at the lowest layer, I should be able to point two of your plugin instances at each other and then control the voice+video chat, without even having to use your play buttons or your friend network.

PLEASE OPEN THIS UP TO THE PROGRAMMERS OF THE WORLD. We will do thousands if not millions of things with it that you never could possibly have done with your own team and your own Google apps and services.

Obviously, you will need either to provide (for free) the Google server resources to work around firewalls, or to provide an API for the plugin that enables declaration of a server which provides those resources.

- Shelby Moore
(contributing or sole) programmer of some million-user commercial software, e.g. Cool Page (coolpage.com) before Friendster & MySpace existed, Corel Painter (formerly FDC Painter), WordUp, DownloadFAST.com, etc.


I tied these comments into more general economic concepts here:

http://esr.ibiblio.org/?p=1247&cpage=1#comment-240223

Shelby wrote: Pointing fingers and debating non-exponential growth paths is not so productive, as this is an entirely natural cycle of exponential growth, peak, and decay:

http://www.coolpage.com/commentary/economic/shelby/Bell%20Curve%20Economics.html

Those who want to continue to grow economically (I am only 44!) rather than decay should understand the exponential function, and thus the markets and strategy they need (I am tying this post into OPEN SOURCE in a big way; let's not forget this is Eric Raymond's blog, the famous author of The Cathedral and the Bazaar!). Even Google, imho, needs to understand the exponential function better:

http://juberti.blogspot.com/2009/09/voice-and-video-chat-in-orkut.html#comments


DNA in software: The Game of Life

Post by Shelby on Thu Oct 01, 2009 11:16 pm



Hand-drawn look 3D rendering in REAL-TIME (meaning 10+ renderings per second)

Post by Shelby on Sat Oct 03, 2009 9:38 pm

Now (2009) finally in real-time PC games:

http://en.wikipedia.org/wiki/Street_Fighter_IV#Visuals
http://en.wikipedia.org/wiki/Cel-shaded_animation
Low quality video example: http://www.amazon.com/gp/mpd/permalink/m8H9C4VJLQXT0 (can't really appreciate the game at that low resolution)

[Images: Street Fighter IV screenshots]

12 years after I did it in real-time, on much slower computers, in my Art-O-Matic program (I think I may have been the first to do it in a commercial PC product):

http://www.coolpage.com/aom.html

[Images: Art-O-Matic screenshot and logo, plus sample renderings: Max Rebo, a tree, a machine gun, a sandal, a Jawa, and a potato head]
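
For readers unfamiliar with the technique, here is a minimal hypothetical sketch of the core cel-shading trick: quantizing the diffuse lighting into a few flat bands instead of a smooth gradient. The function and names are illustrative only, not taken from Art-O-Matic or Street Fighter IV:

Code:
   // Hypothetical cel-shading sketch: snap smooth Lambertian lighting
   // to discrete bands to get the flat, hand-inked look.
   // Assumes `normal` and `lightDir` are normalized {x, y, z} vectors.
   function toonShade( normal, lightDir, bands )
   {
      var dot = normal.x * lightDir.x + normal.y * lightDir.y + normal.z * lightDir.z;
      var intensity = Math.max( 0, dot );             // clamp back-facing light to zero
      return Math.ceil( intensity * bands ) / bands;  // quantize into `bands` flat levels
   }

   // e.g. toonShade( n, l, 3 ) yields only the values 0, 1/3, 2/3, or 1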


P2P lending...hmmm I wonder how the banksters will react to this

Post by Shelby on Mon Oct 12, 2009 11:49 pm



Ultimate virus and ad free browsing

Post by Shelby on Fri Oct 16, 2009 8:21 am

Get:

* Baseline Shield from eazsolution.com
* Adblock Plus Firefox addon, with EasyList+EasyPrivacy filters

The baseline makes sure even Flash LSO objects don't persist between baseline restores (i.e. reboots), and AdBlock filters out most of the junk. It blocked 15 out of 40 resource hogs on Yahoo Finance! Yahoo Finance loads nearly instantly now, even over my slow connection.

Browsing is so incredibly much faster, and no virus can ever persist past a reboot!


Hyperlink Relevance Optimization Via Auction Competition

Post by Shelby on Wed Oct 21, 2009 1:41 pm

A patentable algorithm that I was working on before 2006. I no longer believe in patents and am releasing this into the public domain.

=======================
1. Display all, or a subset, of the list as links ordered, e.g. highest to lowest (claim 1) or weighted by a relevance metric (claim 19).

2. Consider all reader data (input to the bid-price calculation), not just actions on the destination site, e.g. demographics, previous links visited, categories (semantic web) visited, etc.

3. Order a list of multiple links by the ordering method of claim 1, where all the links in the list are specified unordered in the source document before being re-ordered on load or display, and specified unordered as input to any query (from the source document or the generator of the source document) which returns an ordered list or sub-list, where the bid price database...

4. Claim 19's metric could be significant reader preference for another order, as measured by the reader requesting more candidates and the reader's activity patterns on those candidates (time spent, pages visited, etc.).

THE KEY IS GETTING DATA ABOUT EACH LINK CLICKED. SO THE KEY IS WHAT MOTIVATES AUTHORS TO NOTIFY US FOR EACH LINK CLICKED, AND TO LET US DIRECT THE PRIORITY OF ALTERNATIVES. ALTERNATIVES MAY BE EITHER INTRA-LINK (destination alternatives) OR MULTIPLE LINKS.


Title of Invention

Hyperlink Relevance Optimization Via Auction Competition


Abstract

Method and system for efficiently optimizing the relevance of hypertext links by arbitrating auction competition among two or more candidate destinations for a hyperlink.


Background - Field of Invention

This invention applies to any link between hypertext documents, i.e. to any hyperlink. HyperText Markup Language, i.e. HTML, is an example of a hypertext document format, where hyperlinks are encoded with anchor, i.e. <a>, tags.

The utility of large networks of linked hypertext documents is highly correlated to the ability of readers to efficiently locate desired information. To locate information, readers rely on hierarchical catalogs, search engines, and hyperlinks between relevant documents.

Hierarchical catalogs, such as Yahoo(tm), are hierarchically (and often alphabetically intra-) ordered categories, each containing a list of hyperlinks, maintained by human editors.

Search engines employ automated programs which crawl the web of hyperlinks in said network, and catalog or associate the documents to hash tables, which attempt to return the most relevant documents for each search query, ordering the documents from highest to lowest relevance. Search engines, such as Google(tm), that correlate relevance with the quantity and quality of hyperlinks pointing to a destination document, are known to be of high utility, as evidenced by their popularity. However, such search engines depend on the integrity and judgment of the authors of the source documents which contain the hyperlinks, to determine relevance for readers.

With both hierarchical catalogs and search engines, the reader is usually presented with a list of hyperlinks for each category or query respectively, and the reader must decide relevance based on a summary presented alongside each hyperlink, activating each hyperlink to read (or at least scan) each destination document. Whereas each hyperlink between documents gives the reader only one choice: the one chosen by the author of the source document which contains the hyperlink. Thus, for each hyperlink between documents, the relevance or utility for the reader of the destination document depends on the integrity and judgment of the author of the source document.

Thus, in this field of hyperlink relevance, the reader has no direct control over the hyperlinks themselves, thus no direct control over the relevance of said hyperlinks, and no indirect influence over any specific hyperlink. The reader's indirect influence over the hyperlinks is at most the aggregated (over all hyperlinks pointing to a destination document) correlation of search-engine order, for search engines which correlate relevance to the quantity and quality of hyperlinks pointing to a destination document.


Summary

This invention allows two or more candidates to be specified for the destination document of a hyperlink, and a candidate is (or a subset of candidates are) chosen for the reader via an instant auction between the said candidate destination documents. Because the income of the auction is shared with the author of the source document, the said author has an incentive to maximize the auction. Since the return on investment in the said auction for a said candidate destination document is correlated to the relevance for the reader of the said destination document, over a large enough sample of readers, this auction increases the probability of maximizing relevance for the reader over a large enough sample of readers and hyperlinks employing this invention. This relevance optimization happens automatically from the perspective of the reader, yet the reader's indirect influence is specifically correlated to each hyperlink by the economics of relevance, given the per-hyperlink granularity of the auction.

Since said candidate destination documents may also contain hyperlinks employing this invention, i.e. they may be source documents, even non-commercial documents may gain income as source documents from these per-hyperlink auctions, and then use the proceeds to compete in auctions as candidate destination documents. Since, over a large enough sample of hypertext documents in general (not just those employing this invention), the relevance of a document that is the destination of a general hyperlink is correlated to the relevance of the hyperlinks it contains, the optimization of the relevance of the destinations of hyperlinks also optimizes the relevance of the source documents containing those hyperlinks. The implication is that the relevance optimization of this invention snowballs on itself for the entire said network, and even non-commercial documents gain relevance. This invention thus introduces the reader into the determination of relevance, via per-hyperlink-granularity competition between destination documents to obtain readers.


Background - Prior Art

The prior art includes industry group discussions proposing multiple destinations for hyperlinks, with the intention that each candidate be a mirror or exact copy of the same destination document[1]. The purpose of said proposal being that if any one candidate destination is unavailable, then the other candidates are exact substitutes. The method for choosing from the candidate destination documents is to not choose a candidate which is unavailable.

The prior art also includes the "pay per click", i.e. PPC, advertising model, often known as the "affiliation" model, wherein the advertiser who is the owner, or owner's agent, of the destination hypertext document pays the owner, or owner's agent, of the source hypertext document, for each activation of the hyperlink which causes the said destination document to be displayed.

The prior art also includes a variation on the PPC model, wherein a list of hyperlinks is displayed, and the owners of the destination documents of said hyperlinks compete with bid prices, such that the said destination documents with the highest bids have their hyperlinks appear in said list, with the hyperlinks ordered from highest to lowest bid price. The said lists are displayed as, or as an appendage to, search engine results. Or said lists are displayed on hypertext documents as an appendage to the authored content of said hypertext document, e.g. Google AdSense(tm). Unlike this invention, the said list of hyperlinks is perceived by the reader to be advertising, because the said list is an appendage to the relevant authored content, and the choice of hyperlinks in said bid-price auction is not specified by the author of the said relevant content; thus any indirect influence of the reader, via the economics of relevance of said auction, is mitigated by the initial irrelevance, i.e. not chosen by said author, of the choice of candidates in the said auction. The said list of hyperlinks is not competing for any hyperlink in, and is an appendage to, the said relevant authored content. One of the main challenges facing this model today, often called "click fraud", appears to be that the owner of said relevant authored content has a financial incentive to increase clicks on the said list of hyperlinks, but said advertiser does not have an incentive to receive clicks where the reader was induced to click with a non-relevant inducement. An example of an irrelevant inducement is a message in the said relevant authored content which asks the reader to click the advertisements on the page. Since the said list is not directly chosen by the owner of said relevant authored content, and thus said bid prices are not correlated to messages in said relevant authored content, the said auction has no direct influence to disincentivize said irrelevant inducements. Unlike the complete synergy of this invention, there is tension between the incentives of the advertiser and of the owner of the page displaying the advertising. Since in this invention the bid prices of auction competition, and the reader's economics-of-relevance influence, are correlated per-hyperlink in the said relevant authored content, irrelevant inducements are disincentivized.

The aforementioned prior art does not contain the invention of maximizing relevance of a hyperlink by enabling competition between non-substitute candidate destinations. Nor does the prior art incentivize the author of the source document to specify multiple candidate destinations for each hyperlink with a goal of maximizing relevance. These differences from the prior art are not trivial, nor obvious, as evidenced by the fact that hyperlinks have existed in prior art for decades. This invention is not obvious, because without it, it is non-intuitive and useless for an author to specify multiple destination documents for the same hyperlink when the said destinations are not exact copies of the same document. Without an automated means to choose from the candidates, or a way to display the multiple candidates, multiple candidates are a useless burden for the author. Our invention solves this dilemma by introducing competition to automate, for the reader and author, the choice of candidate, in a way that maximizes relevance for author and reader, without any additional effort from the reader.


Description

Although a description of a preferred embodiment follows, this invention is not limited to this description, and continues to include all possible variants and embodiments claimed.

World Wide Web:

The invention is both a method, and a system for implementing the method, for encouraging multiple candidate destinations to be specified for a hyperlink, and for enabling competition to determine which candidate is utilized for the hyperlink. In this description we describe a preferred embodiment of the system which implements this method in the context of the World Wide Web, i.e. WWW, currently the most popular and largest network of hypertext documents.

HyperText Markup Language:

The preferred embodiment is in HyperText Markup Language, i.e. HTML, the most popular document format on the WWW. The invention could also be embodied in similar ways for other document formats which allow hyperlinks, e.g. Flash, XHTML, PDF, etc. In HTML, hyperlinks are specified with anchor tags ("<a>"), with an "href" attribute that specifies the Uniform Resource Locator, i.e. URL, of the destination document.

Candidate Destination URLs Specification:

Although not claimed by this invention, HTML documents must specify two or more candidate destinations for a hyperlink, because hyperlinks containing only one candidate destination are ignored by the invention. One method for doing this was proposed in the prior art[1], wherein an "althref" attribute is specified for the anchor tag ("<a althref>"); then the "href" specifies the default candidate destination URL, and the additional candidate destination URLs are specified in the "althref" in url-encoded format, with each candidate separated by a standard delimiter, e.g. a space. The method can be refined to avoid the potential delimiter-collision problem by using a separate attribute for each candidate destination URL, e.g. "althref1", "althref2", etc., for example (hypothetical markup): <a href="http://url_of_candidate_1" althref1="http://url_of_candidate_2" althref2="http://url_of_candidate_3">. Numerous other possible methods include embedding the additional candidate destination URLs in a scripting array. These methods have the advantage that when this invention is not applied to the hyperlink, the hyperlink continues to function as a normal hyperlink for the default destination URL specified in "href".

The HTML document specification for candidate destination URLs of a hyperlink may be augmented by performing claim 20, to generate additional candidate destination URLs relevant to the document, relevant to some portion of the document such as the text contained in the anchor tag, or relevant to the pre-existing candidate URLs.

Choose Candidate Server-Side:

One disadvantage of specifying the candidate destination URLs in the HTML document is that the destination chosen as a result of this invention is not explicit in the HTML document. To solve this, the list of candidate destination URLs can be specified as aforementioned, then claim 4 of the invention is applied before serving the HTML document to the client software program of the reader, thus removing all but the chosen candidate destination URL from the HTML document served, so that it will contain a normal anchor tag with an "href" attribute for the chosen candidate destination. The HTML document is parsed, a query is sent to the centralized server program of this invention, and then the HTML document is edited to remove all candidate destination URLs except the chosen destination returned in the query result. An alternative, which will also illustrate the detail of sending the query in either case, is to encode the HTML document on the server using a server-side scripting language, e.g. PHP; then the query is sent to the centralized server program of this invention, and the query result is written into the hyperlink of the HTML document output. An example PHP coding follows, with the URL of the source HTML document (from claim 9) and the account unique identifier (from claim 10) also sent in the query. Note that the cache of claim 5 is performed by PHP, the webserver, or another proxy cache on the output HTML document:

Code:
   <?php
      // Build the claim 4 query data: the candidate destination URLs,
      // plus (claims 9 and 10) the source document URL and the account id.
      $data = "candidate1=" .  urlencode( "http://url_of_candidate_1" );
      $data .= "&candidate2=" .  urlencode( "http://url_of_candidate_2" );
      $data .= "&candidate3=" .  urlencode( "http://url_of_candidate_3" );
      $opt = "&source_url=" . urlencode( "http://url_of_document_being_served" );
      $opt .= "&account=" . urlencode( "account unique identifier for owner of document being served" );
      $data .= $opt;
      // Send the query to the centralized server program.
      $ch = curl_init( "http://url_of_central_server_chooser?" . $data );
      curl_setopt( $ch, CURLOPT_FAILONERROR, 1 );
      curl_setopt( $ch, CURLOPT_RETURNTRANSFER, 1 ); // return the response rather than printing it
      $response = curl_exec( $ch );
      curl_close( $ch );
      if( $response !== false && !empty( $response ) )
      {
         // Write the chosen URL into the hyperlink as the claim 6 query, with
         // the redirect flag set; note the chosen URL must itself be url-encoded.
         echo "<a href='http://url_of_central_server_query_claim_6?redirect=1&chosen_url=",
            urlencode( $response ), $opt, "'>";
      }
   ?>
Claim 23 can be performed to cull erroneous or inaccessible candidate URLs before the query is sent.

Note that the above server-side script performs the query of claim 6 by inserting the query into the "href" of the hyperlink output and setting the redirect flag. An alternative is to insert the chosen URL in the "href" and set an attribute on the anchor tag which indicates that the query for claim 6 must be sent by a client-side script. The necessary client-side script can be deduced from the description which follows.

However, the disadvantage of performing claim 4 server-side is that the candidate destination URLs are not specified in the HTML document served to the client software program of the reader, which precludes claims 22 and 8, and obscures the author's candidates from WWW crawlers.

Choose Candidate Client-Side:

To perform claim 4 client-side, a script is added to the HTML document, or to the client software program displaying the HTML document. The script is called when a hyperlink is clicked or otherwise activated, by adding the script as a handler for the appropriate event, e.g. onclick. The script gathers the candidate destination URLs from the "href", "althref", and/or any script array variable, sends the query to the centralized software program, and sets the "href" attribute of the hyperlink to the chosen destination URL returned in the query result. After setting the "href", the script performs claim 5 by setting a flag to record that the query result is cached, so the query is not sent again. This script could also be called on other events which need the "href" to be resolved but do not cause the destination document to be displayed, e.g. onmouseover, since "href" is often displayed in the status bar of many WWW browsers. If the script has been called for an event which will display the document which is the "href" of the hyperlink, then claim 8 may be performed instead of sending separate queries for claims 4 and 6. An example W3C DOM Level 2 compatible script follows; DoServerQuery() is not shown, as its query string is the same as in the server-side logic above (with the addition of the flag indicating claim 8), but coded with the well-known (to someone skilled in the art) client-side AJAX technique, instead of the CURL used server-side above:

Code:
   <script type="text/javascript">
      var cached_url = false;
      window.document.getElementsByTagName( "body" ).item( 0 ).addEventListener( "click",
         function( e )
         {
            // Claim 5: once the query result is cached, the "href" is already
            // resolved, so let the click proceed normally.
            if( typeof( cached_url ) != "boolean" ) return;

            // Search for parent anchor tag, if any
            for( var target = e.target;
               target && target.nodeName.toLowerCase() != "a";
               target = target.parentNode );
            if( target == null ) return;

            // Populate array of candidate destination URLs
            var urls = new Array();
            var attr = target.attributes.getNamedItem( "href" );
            if( attr != null && attr.value != "" )
            {
               urls.push( attr.value );
               attr = target.attributes.getNamedItem( "althref" );
               if( attr != null && attr.value != "" )
               {
                  urls = urls.concat( attr.value.split( " " ) );

                  // Send query for claim 8 to server, then resolve the "href"
                  cached_url = DoServerQuery( urls );
                  target.setAttribute( "href", cached_url );
               }
            }
         },
         true/*capture event before target gets it*/ );
   </script>

Claim 23 may be performed to cull erroneous or inaccessible candidate URLs before the query is sent.

A disadvantage of client-side scripting is that it may not work in all client-side programs, e.g. all WWW browsers.

Centralized Server Program:

The previously described queries are received over the network by the centralized server program, processed, and a result is returned over the network in the query response.

The three possible queries previously described correspond to claims 4, 6, and 8. Claims 9 and 10 add to the received query data, respectively: the URL of the document containing the hyperlink, and the unique identifier of the account of the entity that owns the said document.

Claim 4 Query:

In the query for claim 4, the received data are the candidate destination URLs, the said document URL, and the said account unique identifier. The cache hash table is accessed with the input document URL as the hash key. If a record is found and the timestamp is not expired, then the record values are used.

Otherwise, claim 23 can be performed to cull erroneous or inaccessible candidate URLs. Claim 14 is performed to obtain the previously specified bid prices for each of the candidate URLs from a cached hash table. Per claim 7, any candidate URL which does not exist as a hash key in the hash table is assigned a bid price of zero. The method of claim 1, optionally incorporating the variants of claims 2, 18, 19, and 20, is performed to choose a single destination URL from the candidates. The bid price for the chosen destination URL may be altered by claim 21. Claim 5 is performed by storing the timestamp, resultant bid price, the input candidate URLs, and the chosen URL in the cache hash table, with the input document URL as the hash key.

The chosen URL is returned as the query response. We do not describe the embodiment that can return a sub-set of candidate destination URLs, as it is an obvious extension of the current embodiment. Using a multiple-candidate response on the client side requires incorporating claim 22, an advanced embodiment that is an obvious extension of this preferred embodiment for someone skilled in the art.
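
A minimal sketch of the above claim 4 processing follows, in JavaScript with hypothetical names (an illustration only, not the actual centralized server program):

Code:
   // Hypothetical sketch of the claim 4 query handler: consult the claim 5
   // cache, else choose the highest-bid candidate per claim 1, defaulting
   // unknown bids to zero per claim 7, then cache the result.
   function handleClaim4Query( sourceUrl, candidateUrls, bidPrices, cache, ttlMs )
   {
      var record = cache[sourceUrl];
      if( record && ( Date.now() - record.timestamp ) < ttlMs )
         return record.chosenUrl;   // claim 5: reuse the cached choice

      var chosenUrl = null, best = -1;
      for( var i = 0; i < candidateUrls.length; i++ )
      {
         var bid = bidPrices[candidateUrls[i]] || 0;   // claim 7: missing bid is zero
         if( bid > best ) { best = bid; chosenUrl = candidateUrls[i]; }
      }
      cache[sourceUrl] = { timestamp: Date.now(), bid: best,
                           candidates: candidateUrls, chosenUrl: chosenUrl };
      return chosenUrl;
   }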

Claim 6 Query:

In the query for claim 6, the received data are the single chosen destination URL, an optional redirect flag, the said document URL, and the said account unique identifier. This query can be distinguished from claim 4's, because some of the input names are different. The bid price is retrieved from the record of the cache hash table, where the input chosen URL is the hash key. The timestamp is ignored, and the record must exist, else the query response returns an error, meaning that the corresponding query for claim 4 was not received previously. Claim 15 is performed using claim 18, by getting the destination account record (or the account unique identifier as the hash key for an intermediary hash table that contains the record) from a hash table where the input chosen destination URL is the hash key. The account record is debited by the bid price. Claim 11 is performed, and the input account, if a record for it exists in the database of the centralized server program, is credited with a portion of the bid price. The remaining portion is retained by the centralized software program. If the input specifies the redirect flag, then the query does not return a response and instead redirects to the input chosen URL; else the query returns a success response.
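
Again as an illustration only (hypothetical names, with a simple in-memory `state` object standing in for the server's hash tables), the claim 6 settlement might look like:

Code:
   // Hypothetical sketch of the claim 6 settlement: debit the account behind
   // the chosen destination by the full bid (claim 15), credit a portion to
   // the source document's account (claim 11), and retain the remainder.
   function handleClaim6Query( chosenUrl, sourceAccountId, state )
   {
      var record = state.cache[chosenUrl];
      if( !record ) return "error";   // no corresponding claim 4 query was received
      var bid = record.bid;
      var destAccountId = state.destinationAccountOf[chosenUrl];   // claim 18 association
      state.accounts[destAccountId] -= bid;                        // claim 15: debit the full bid
      if( sourceAccountId in state.accounts )
         state.accounts[sourceAccountId] += bid * state.shareRate; // claim 11: credit a portion
      // the remainder, bid * (1 - state.shareRate), is retained by the central program
      return "success";
   }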

Claim 8 Query:

In the query for claim 8, the received data are the same as for the query for claim 4, with the addition of a flag. This query is processed as if it were the query for claim 4, except that before returning the chosen URL in the query response, the query is processed as if it were the query for claim 6, without a redirect or response; then the chosen URL is returned in the query response.

Bid Price Specification:

Mention account creation and query specification.

Bid Price Metric Tracking:

Mention tracking by centralized server.


Bid Price Queries:

Mention specialized queries for advanced metric tracking.



Claims

1. Given a hyperlink that specifies two or more candidate destinations, where each destination specification uniquely identifies a hypertext document, and given a method that specifies the bid price of a destination of a hyperlink, I claim the method of choosing a destination, or choosing a subset of said candidate destinations, from the said candidate destinations; by selecting from said candidate destinations in order of highest to lowest said bid price; where destinations with no bid price are assigned a bid price of zero, and destinations with equal order may be ordered with any method, such as random order or the order assigned by the author of the hyperlink.

2. Given a method, that specifies the bid price of a destination of a hyperlink, and depends on a sample of readers coming from said hyperlink to the hypertext document which is the said destination, I claim the method of claim 1, where said candidate destinations whose current bid price would not cause them to be chosen and which depend on said sampling method, are ordered highest with a portion of the frequency, of instances of said hyperlink, sufficient to operate the said method which depends on said sample; however where heuristic constraints on said frequency portion are intended to prioritize the maximization of total income from said bid prices over all instances of said hyperlink.

3. I claim the method of claim 1, where said bid price is denominated in tradeable units of a physical asset, such as tradeable digital certificates of ownership of a quantity of allocated or unallocated precious metals.

4. I claim the method and system of claim 1, where a network's unique identifiers, e.g. the URLs, for said candidate destination specifications, or aliases for said candidate destination specifications, are sent over a network as a query, by the client software program which is displaying the said hyperlink, or by the server software program which is serving the hypertext document containing said hyperlink, to a centralized software program running on a server computer; so that said centralized software program performs the method of claim 1 and returns the said choice, or choices, query result to the said client software or server program.

5. I claim the method and system of claim 4, where said query and result may be cached by said client software program, by any proxy software program on the network, and/or by the said centralized software program; and the said cached results are used instead of repeating the method of claim 1.

6. I claim the method and system of claim 4, where when the hypertext document which is the chosen destination of said hyperlink will be displayed as a result of activating said hyperlink, then said client software program sends said choice in a query, to said centralized software program; and if the client sends a redirect flag in the query, then the centralized software program redirects to the hypertext document which is the chosen destination, else a success query response is returned.

7. I claim the method and system of claim 6, where said query is not sent when said bid price, for the said chosen destination, is known to be zero.

8. I claim the method and system of claims 4 and 6, where said client software program may delay sending the query of claim 4 until it is time to send query of claim 6, then only query 4 is sent but with an extra flag to indicate to said centralized software program that the hypertext document which is the said chosen destination of said hyperlink will be displayed by said client software program upon receipt of result of said specialized query of claim 4; thus after sending said result for specialized query of claim 4, said centralized software program sends itself the query of claim 6.

9. I claim the method and system of claims 4 and 6, where a said network's unique identifier, e.g. the URL, for the hypertext document that contains said hyperlink, or an alias of said network's unique identifier, is sent in the said queries.

10. I claim the method and system of claims 4 and 6, where a unique identifier, that identifies an account provided by the said centralized software program, is sent in the said queries.

11. I claim the method and system of claim 10, where when query of claim 6 is received by said centralized software program, the said account is credited with a portion of said bid price.

12. I claim the method and system of claim 1, where said bid price is returned by a query sent over a network to a centralized software program running on a server computer, where said centralized software program is also responsible for any query of a said network's unique identifier for the hypertext document which is the said candidate destination; and the said unique identifier is specified in the query.

13. I claim the method and system of claims 9, 10, and 12, where either or both of the unique identifiers of claims 9 and 10 are also sent in the query of claim 12.

14. I claim the method and system of claim 4, where said centralized software program provides the said bid price for said candidate destination; where said bid price is cached by said centralized software program as result of being specified by the given method.

15. I claim the method and system of claims 11 and 14, where an account is debited by the full amount of said bid price and the debited account is another instance of an account provided by the centralized software program, and which is associated with said chosen destination; where said association is a pre-existing record stored on said centralized software program, and the record contains both said unique identifier for said debited account and a said network's unique identifier, or an alias of network's unique identifier, for the hypertext document which is the said chosen destination; and where said centralized software program retains the portion of the bid price not credited to the said credited account of claim 11.

16. I claim the method and system of claims 2, 10 and 14, where said centralized software program calculates said bid price, to optimize a desired target metric by statistically sampling the hyperlinks activated by readers of the hypertext document which is the chosen destination; where the reader is identified by a said network's unique identifier that correlates to the reader's instance of said client software program, e.g. IP address; or where the reader is identified by any other means of the said client software program, e.g. HTTP cookie.

17. I claim the method and system of claim 16, where said sampling may include specialized queries sent to the said centralized software program by the scripting programs in the hypertext documents which are destinations of said sampled hyperlinks, e.g. a notification of a purchase price earned for a product offered by said hypertext document; and where said optimization of desired target metric may incorporate the data provided by said specialized queries.

18. I claim the method and system of claim 15, where said centralized software program may store in said debited account the said associations to a plurality of said destination hypertext documents; and said debited account is associated with a password, and said password must be specified to associate said destination hypertext documents; and said crediting of an account does not require a password; and said password is required to debit said account to an external system.

19. I claim the method and system of claim 1, where the said order of said candidate destinations may be altered by giving some weight to a metric that orders the candidate destinations by their relative importance or relevance; for example for the purpose of altering said order only, then multiplying each said bid price by the number of known hyperlinks on said network that specify the candidate destination that corresponds to the bid price.

20. I claim the method and system of claim 1, where given the said hyperlink has only one candidate destination, or to supplement the list of said candidate destinations for a hyperlink, additional candidate destinations may be generated algorithmically; for example given that said hyperlink is expressed in HTML, then generating the results of a search engine algorithm for the text contained within tags of said hyperlink, or for example generating the candidate destinations of all hyperlinks in the hypertext document which is the given candidate destination of the said given hyperlink.

21. I claim the method and system of claim 1, where for purposes other than selecting the candidate destinations, each bid price for each chosen destination, may be reduced exactly to, or some fixed number of units or percentage above, the next lower bid price in the list of candidate destinations ordered by unreduced bid price.

22. Given a hyperlink that specifies two or more candidate destinations, I claim the method and system where when it is desired that a destination of said hyperlink will be displayed as a result of activating said hyperlink, then the software program which is displaying the hyperlink, will display hyperlinks for all the candidate destinations.

23. I claim the method and system of claim 1, where the client software program and centralized software program eliminate candidate destinations which are errors or are not accessible over said network.


References

[1] put link here for althref discussion thread


FlexCanvas Algorithm

Post by Shelby on Wed Oct 21, 2009 1:43 pm

A patentable algorithm that I was working on before 2006 (see prior post in this thread also). I no longer believe in patents and am releasing this into the public domain.

================
FlexCanvas(tm)
================

The FlexCanvas algorithm alters the layout of a fixed-position document, so that the width of the overall document may flex to fit a desired width, such as the width of the display window. The height of the document depends on the width fitting, and flexes as necessary. Optionally, the horizontal and vertical orientations of the algorithm may be transposed, to fit the document height instead.

*Note: the most recent profound "eureka" discoveries are marked with an asterisk (*).

(1) For each displayable object on the document canvas, FlexCanvas inputs the object's position, bounding rectangle, and whether the object is flexible. An object is flexible if its bounding rectangle can trade width for height (or vice versa) without adverse consequence, up to some reasonable extreme. An example of a flexible object is text, possibly with embedded images, which has a flowed layout, e.g. as word processors and HTML flowed layout can do.

(2) The goal of FlexCanvas is to automatically, or semi-automatically, slice up the document into a table of columns and rows, such that the cells which contain flexible objects can flex and enable the entire table width and height to flex, while maintaining a relative layout order that approximates the intended design of the user, and without causing non-intersecting objects to intersect.

*(3) Of all the imaginary rectangles that could be drawn on the document canvas without intersecting any displayable objects, FlexCanvas finds the candidate set of all those imaginary rectangles which have their vertical sides touching the right side of an object (or, for a right-to-left flowed layout, touching the left side), and their horizontal sides touching the bottom side of an object (or, for a bottom-to-top flowed layout, touching the top side).

*(4) From the candidate set of imaginary rectangles, FlexCanvas must select a set of non-intersecting rectangles which cover the entire area of the document canvas. The quality of the algorithm for choosing this set from the candidate set is crucial to the quality of the result, as per the goals of step #2. Description of this selection algorithm begins in step #8.

(5) The chosen set is converted to the analogue of an HTML table, where each imaginary rectangle becomes a cell in the table, and the objects contained within the rectangle are given fixed positions relative to the cell's top-left corner (or whichever corner positioned layout is based on), which are the objects' offsets from the corresponding corner of the rectangle which contains them.

(6) The conversion of the imaginary rectangles to table cells requires the use of the rowspan cell attribute on a cell when its top side is higher than, and its bottom side is lower than, the top side of another cell (or vice versa for bottom-to-top layout tables):
Code:
  _______
 |      |
 | cell |______
 |      |      |
 --------      |
        |      |
        --------

Similarly, the use of colspan is required when a cell's left side is more leftward than, and its right side is more rightward than, the left side of another cell (or vice versa for right-to-left layout tables):
Code:
  _______
 |      |
 | cell |
 |      |
 --------___
    |      |
    |      |
    |      |
    --------

(7) All top and left side coordinates are collected from the chosen set in ascending order (or vice versa for alternative-direction layout tables), where the top sides are the row starts and the left sides are the column starts of the table. For each row, the cells are output in column order for rectangles whose top side is on the row, with the rowspan set to the number of other rectangles which intersect as per the first illustration above, and the colspan set to the number which intersect per the second illustration. For each gap in the row where there is no rectangle whose top side is on the row, an empty cell (or one containing a 1x{gap width} invisible object to prevent cell-formatting collapse) is output, with its width set to the gap width and its colspan set to the number of other rectangles which intersect per the second illustration. (A sketch of this step follows step #14 below.)

Note that steps #5, #6 and #7 are similar to the Cool Page layout algorithm, which uses tables (Browser version 3.0 compatibility mode) to lay out objects with fixed positions, except that here the widths are not set for cells containing objects, and each cell does not necessarily contain a single object but represents a rectangle which can contain multiple objects.

(8) Prior to steps #5, #6 and #7, the chosen set is selected from the candidate set. There are numerous possible algorithms, some fully automatic, and semi-automatic ones which render variants to the user to choose from.

(9) The goal is to rank multiple chosen sets by heuristic likelihood to meet goals of step #2.

*(10) Find the priority set of all imaginary rectangles from the candidate set that contain only one object that is flexible, plus imaginary rectangles that contain only one flexible object where all other contained objects' right sides are leftward of the left side of the flexible object (or, for right-to-left layout tables, left sides rightward of the right side).

If there is a minimum width that the flexible object can flex to, then the contained objects' right sides may be leftward of the sum of the left side and this minimum width (or, for right-to-left layout tables, left sides rightward of the difference of the right side and this minimum width). For example, text objects are typically limited to the width of their widest word, unless the word can be split via hyphenation.

*(11) Find all possible sets that have one imaginary rectangle, from the priority set, for each flexible object in the document. Rank the possible sets in descending order of the average area of the imaginary rectangles in the set.

(12) For each set in the ranked sets, find the finishing set, from the candidate set, that has the maximum average area of its imaginary rectangles and covers the area of the document canvas not covered by the corresponding set in the ranked sets.

(13) Make a ranked set of chosen sets, by combining each finishing set with its corresponding set in the ranked sets.

(14) Either automatically select a chosen set, such as the highest ranked or by some heuristic qualitative measurements, or render the N top-ranked chosen sets to the user.
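
As promised in step #7, here is a minimal illustrative sketch (hypothetical names, not the Cool Page source) of deriving table rows, columns, and spans from a chosen set of rectangles:

Code:
   // Hypothetical sketch of step #7: collect row starts (tops) and column
   // starts (lefts), then compute each cell's rowspan/colspan by counting
   // how many subsequent starts fall strictly inside the rectangle.
   function rectsToGrid( rects )   // rects: [{ left, top, right, bottom }]
   {
      var unique = function( a ) { return a.filter( function( v, i ) { return a.indexOf( v ) == i; } ); };
      var ascending = function( a, b ) { return a - b; };
      var rowStarts = unique( rects.map( function( r ) { return r.top; } ) ).sort( ascending );
      var colStarts = unique( rects.map( function( r ) { return r.left; } ) ).sort( ascending );
      return rects.map( function( r )
      {
         return {
            rect: r,
            row: rowStarts.indexOf( r.top ),
            col: colStarts.indexOf( r.left ),
            rowspan: rowStarts.filter( function( y ) { return y > r.top && y < r.bottom; } ).length + 1,
            colspan: colStarts.filter( function( x ) { return x > r.left && x < r.right; } ).length + 1
         };
      } );
   }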


Building the best open source game engine & why

Post by Shelby on Thu Oct 22, 2009 10:23 am

Find attached the following excellent technical overview of the challenges in creating the ideal open source game engine and scripting language. The main point is that we have to build in error resistance and concurrency at the language+compiler level:

http://en.wikipedia.org/w/index.php?title=Unreal_Engine&oldid=315569675#cite_note-32
http://lambda-the-ultimate.org/node/1277#comment-14196
http://graphics.cs.williams.edu/archive/SweeneyHPG2009/TimHPG2009.pdf

Other interesting posts:

http://lambda-the-ultimate.org/node/3637#comment-51475
http://lambda-the-ultimate.org/node/3637#comment-51597 (my rebuttal to Tim Sweeney, the #2 game programmer in the world)

List of existing freeware/open source engines, with notables to raid for source examples:

http://en.wikipedia.org/wiki/List_of_game_engines#Free_and_open-source_engines
http://en.wikipedia.org/wiki/Delta3d
http://en.wikipedia.org/wiki/Genesis_Device
http://en.wikipedia.org/wiki/Id_Tech_4#History

The point to be taken from the above is that technology is a moving target, so we need the most granular modularity, so that 80% of the code isn't thrown away on each move forward. The code needs to be written so that infinitesimal portions can be replaced without breaking anything else. This is in general the "Holy Grail" of software, and what is needed to break through to a whole new level of computing worldwide, with more people involved in creating content. The compiler+language issues are critical as well.

I want to start "hands on" online games-programming classes at 1000s of internet cafes across the developing world and unleash this huge billion-person pool of idle youth creativity. In the process, we will unseat Microsoft (DirectX games are a big barrier to Linux adoption in the developing world) and other barriers standing in the way of freedom, and hopefully also inherently diminish the power of the current Wall Street banksters' system. It is a race of productivity versus the parasitic aspects of the Harlot system. The more productive people are, the more free they are, because they have less time to participate in the debt and drug economy. Also, the entertainment of the world will be created and distributed de-centralized, breaking the mind control of mass media. The latest games are so realistic that making one is like directing a Hollywood movie.

The problem with most suggestions for fighting the parasitic Harlot (fiat+drug) global economy is that they advocate collective action which does not spawn from individually motivated action. For example, the suggestion to buy physical silver creates no viral effects and no near-term benefits: the individual runs out of capital to buy more silver and waits helplessly for the end scenario of financial-system crisis (precisely against the story of the Talents in the Bible). Ditto political action.


Multi-threading problem space validates My Theory of Everything?

Post by Shelby on Fri Oct 23, 2009 2:52 pm



My greatest writing ever: combines Biblical wisdom and technology

Post by Shelby on Sun Oct 25, 2009 4:16 pm

Relates how computer programming paradigms fail for the same reason societies do: they fail to respect the natural law about promises, and trust in semantics over the natural law.

Includes a link to cognitive scientist Marvin Lee Minsky's new book, The Emotion Machine:

http://lambda-the-ultimate.org/node/3637#comment-51736

Also linked from here: "Whose clock is broken twice per day?"


Computer power increased a million-trillion times in a boomer's lifetime

Post by Shelby on Sun Oct 25, 2009 9:47 pm

http://www.defmacro.org/ramblings/fp.html

The first machine to solve ballistic tables was a Mark I built by IBM: it weighed five tons, had 750,000 parts, and could do three operations per second.

Now a desktop computer with a $100 3D graphics card can do more than 1 trillion operations per second.

Imagine that in the lifetime of the typical baby boomer, cumulative computing power on earth has increased more than a million-trillion-fold (given the millions of desktop computers in the world now, versus only one Mark I in the 1940s). As a rough check: 10^12 ops/sec versus 3 ops/sec is a factor of about 3 x 10^11 per machine, and tens of millions of such machines bring the total well past 10^18.

A million-trillion has 18 zeros!

But to put that in perspective against Avogadro's constant: it is not even yet equivalent to the number of molecules in a gram of matter:

http://en.wikipedia.org/wiki/Avogadro_constant


Computer Science Reading List

Post by Shelby on Mon Oct 26, 2009 1:40 pm





re: Ultimate virus and ad free browsing (and more productive too)

Post by Shelby on Thu Oct 29, 2009 3:44 pm

Shelby wrote: Get:

* Baseline Shield from eazsolution.com
* Adblock Plus Firefox addon, With EasyList+EasyPrivacy filters

The baseline makes sure even Flash LSO objects don't persist between baseline restores (i.e. reboots), and AdBlock filters out most of the junk. It blocked 15 out of 40 resource hogs on Yahoo Finance! Yahoo Finance loads nearly instantly now, even over my slow connection.

Browsing is so incredibly much faster, and no virus can ever persist past a reboot!

You really should switch to Firefox 3.x, then add the following addons:

* Adblock Plus, with EasyList+EasyPrivacy filters - filters most ads before they load, speeding up browsing, reducing errors/crashes, and uncluttering the webpage; you can then restore them from the "ABP" stop-sign icon

* Flashblock - replaces every Flash video/animation, even ads, with a button to click if you want to load & display them

* Split Browser - allows you to split the browser window (horizontally or vertically) with a draggable bar, so you can see 2 or more webpages simultaneously side-by-side, which is especially productive on a large 19" wide screen. Works with the multiple tabs (pages open) of Firefox.

* Image Zoom - I don't use this much, but you can right-click on images, then roll the mouse wheel to zoom in/out on them in fine increments.

* Multiple Tab Handler - enables group operations on multiple tabs (pages open), and it integrates well with Split Browser.

It is also advisable to add the Rollback product, which is similar to Baseline Shield and much better than Deep Freeze, Norton, and others, because of its ability to recover even if the OS won't start, and the faster performance of its restores.


Topologically efficient URLs and P2P network topologies

Post by Shelby on Sun Nov 01, 2009 5:58 pm

See also my architectural comments about BitTorrent free-loading and opportunity-cost minimization.

This post continues from this one.

Something I wrote in email on Sat, April 26, 2008 3:13 am:

In short, the technological architectural problem must be solved in the context of the economic problem (see below).

Deterministic vs Random P2P Topology
From what I remember off the top of my head from my prior research, the deterministic P2P topologies assign each peer a segment of the resource locator hash key. Within a segment, peers communicate with each other to make sure sufficient redundancy is maintained. Deterministic topologies are more deterministic in terms of performance, but anonymity & resistance to attack are weak.

The random (non-deterministic) P2P topologies (e.g. BitTorrent?) randomly poll peers (outside their local portion of the global DHT), then the segmentation of the resource locator hash key builds statistically over time, as requested resources statistically gravitate to peers closer to their requests over time. Statistical topologies have stronger anonymity and resistance to attack, but are less deterministic (more statistical) in terms of performance.

It seems in both cases, we only need a resource locator hash key that is statistically uniformly distributed? Thus taking the MD5 hash of a URL is sufficient? Thus the URL is a superset of the P2P resource locator: apply MD5 to the URL whenever a P2P request is desired.

So as I wrote before, dash, my design is already generalized enough, and the market demand will be created for the P2P storage. Then all someone has to do is write a plugin for the browser which redirects a URL to an MD5 hash locator in a P2P topology. The URL access and the superset P2P access (by taking the MD5 hash of the URL) can operate in parallel. The client can try either and take the first one that returns the resource. There is no chicken-and-egg problem because URLs can build out the market, then anyone can supplement with P2P parallelism. The P2P parallelism provides the advantage of more robust performance and will allow users to publish persistent data to the P2P network, thus avoiding the need to maintain a URL location permanently. One could simply plug in existing P2P networks like BitTorrent, so there is certainly no chicken-and-egg problem, as the networks already exist.
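
For concreteness, here is a minimal sketch in Python of the mapping just described (the function name is mine, purely hypothetical): the same URL serves as both an HTTP locator and, via its MD5 hash, a P2P/DHT key, so a client can race both lookups and take whichever returns first.

import hashlib

def dht_key(url: str) -> str:
    # Derive a statistically uniformly distributed resource key from a
    # URL. MD5 is used only because it is what this post proposes (any
    # uniform hash would do); this is not about cryptographic security.
    return hashlib.md5(url.encode("utf-8")).hexdigest()

url = "http://example.com/some/resource"
key = dht_key(url)
# A client could now issue both requests in parallel:
#   1. a plain HTTP GET on `url`
#   2. a lookup of `key` on whatever P2P overlay is plugged in
# and use whichever responds first.
print(key)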

The above successfully addresses the following prior posts:

shelby;34847 wrote:dash, I have an engineering conceptual question that I
would like to ask you to help me solve.

URL (Uniform Resource Locator) references a specific host (IP via DNS),
then a path within that host to a resource.

What would be the ideal structure for a resource locator which references
a resource that is distributed across a P2P network? I understand an MD5
hash can uniquely identify a resource, but it contains no optimization
hints on locating the resource. How would a transient P2P storage work
and how should the resource locator thus be optimally encoded?

I say "transient", because the most efficient energy design would be one
that cache's resources at clients that are on, with sufficient redundancy
or even deterministic redundancy, that there is always at least a few
machines on the network which are on and have a copy of (chunks of) the
resource. Rather than requiring the distribution of new web appliances,
the market will be most efficient if leverages existing PCs via new
software. Software paradigms are always more energy efficient (spread
faster), than adding hardware paradigm shifts. Remember that bandwidth is
not free, so we must increase efficiency by using underutilized (wasted)
resources.

I am currently using URLs in my design, and I want to contemplate how to decentralize the resource locator. I think this is the last key design concept I need to conquer. The market will create the demand. This superior resource locator should be a superset of the URL, so URLs can be used to build the market size; then P2P can be phased in automatically via a more efficient program that gets distributed as this design spreads out.

Anyway, what should the topology of the resource locator be? I think we should look at BitTorrent?

shelby;33839 wrote:dash, my 3rd form of storage will be creating a market for such generic (low-cost), distributed (P2P caching) network storage appliances.

shelby;33826 wrote:dash, you don't understand. I said the semantics are
the storage-- the whole concept of monopoly of storage disappears... It
is such an extreme paradigm shift, that you are not seeing it. I do not
require clients to be on. I do not want to explain it to you further at
this time. Thanks for your feedback.


The remaining problem for P2P is the prior post I made about monetizing the sharing incentive (which was spawned conceptually from probably one of the most important & wisest concepts Jason Hommel ever said to me: "you can not make something free, which is not free"). I will continue to ponder this one (as Bram Cohen, the creator of BitTorrent, does); maybe others can offer their input?

Apparently BitTorrent uses the self-motivation of the download client for improved performance as the monetization and motivation for sharing upload bandwidth. That problem is independent of the design decision I need to make now for my project described in this thread for de-centralizing APIs via semantics: as long as I support URLs, then I am fully generalized to any future P2P topology, because a URL can be used as a hash key and all P2P topologies operate on hash keys.

====================
Economic challenge of P2P is monetizing the sharing incentive of upload
bandwidth

---------------------------- Original Message ----------------------------
Subject: Canceling the open protocol P2P project
From: "Shelby Moore"
Date: Sat, January 6, 2007 10:03 pm
To: "Jason_Hommel"
--------------------------------------------------------------------------

I am leaving open the possibility of a non-open P2P file streaming method
as discussed further below.

There is no mainstream, legal demand for the open protocol P2P file
delivery network. The robustness of P2P is not something in current
demand, as the advertising and subscription revenue models are able to
fulfill 80% (measured economically, i.e. who cares about asia when they
are economically irrelevant on the internet) of the robustness needs.

The one demand case I can think of is where someone wants to broadcast video (or another large file type) where bandwidth costs are excessive for the business model of the site, but doesn't want the viewer (downloader) to have to stop and pay (or wants them to pay less than the bandwidth cost). This was the original case that led me to the P2P idea. An open P2P protocol really doesn't help much, because we have to make users pay for each other's upload bandwidth, else freeloading could destroy the network. Trying to monetize this via a form of micropayments, with semi-automated transactional cost, seems cumbersome for this instant-gratification demand case.

Originally I was envisioning broadcasting internet TV for example.

I am trying to think if there is any way to prevent freeloading in a less open P2P protocol, so that upload bandwidth could be shared reciprocally without needing to introduce monetization. The key seems to be the use of the source website as a hub, to police the sharing of bandwidth among the viewer's peers, but the problem is that we have no way to verify that the peer actually sent the bandwidth, as we can't trust that the receiving peer hasn't lied about not receiving the bandwidth. The receiving peer has no incentive to lie (the bandwidth trade is free and the receiving peer needs the data), except to be malicious. The sending peer has an incentive to lie in order to conserve upload bandwidth (it already has the data). And I now realize that we have the same problem in the monetization case: how can we know which of the two peers is a liar, when they disagree about whether bandwidth was transferred between them? I do not remember how I planned to solve this, if ever I had a solution in mind. I think I had planned to throttle down those peers which were consistently failing and randomly rotate peer matchups, thus liars would statistically throttle out of the network. I could apply this same technique to the policed, free bandwidth trading case, where liars (or those with saturated bandwidth) lose download speed.
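
A minimal sketch (in Python, names hypothetical) of the statistical throttling just described: peers whose transfers consistently fail, whether from lying or saturated bandwidth, get ever less priority, and matchups are rotated randomly so a liar cannot single out one victim.

import random
from collections import defaultdict

class PeerPolice:
    # Hub-side bookkeeping for the "policed" free-bandwidth trade.
    def __init__(self):
        self.successes = defaultdict(int)  # peer -> confirmed transfers
        self.failures = defaultdict(int)   # peer -> disputed/failed ones

    def report(self, sender, ok):
        # We cannot know whether sender or receiver lied about a failed
        # transfer, so we only penalize statistically over many matchups.
        if ok:
            self.successes[sender] += 1
        else:
            self.failures[sender] += 1

    def priority(self, peer):
        s, f = self.successes[peer], self.failures[peer]
        return (s + 1) / (s + f + 2)       # smoothed success rate

    def match(self, candidates):
        # Random rotation weighted by reputation: consistent liars
        # statistically throttle out of the network.
        weights = [self.priority(p) for p in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]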

But this policed idea is only applicable to peers which are viewing (downloading) simultaneously, as once the download has completed, the peer has what it wants and could stop cooperating. The only stick we have to control the peer in this free, policed case is the peer's desire to get the rest of the file. So this would only be viable when there are many simultaneous viewers (downloads).

Since this would be real-time (simultaneous), another potential problem is how to fan out (fork out; think of a tree with the source as the trunk and peers as branches) fast enough that there isn't a huge latency between the trunk source and the outer branch peers. The only way to solve this is to have trunk-proximate peers send more uploads than they download, so that fan-out grows geometrically. However, this places a larger bandwidth cost on the viewer, which may not be preferable to paying for the content. Latency is probably not a big issue for file downloads, only for viewing live video.
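
To quantify the fan-out concern: if each peer relays to f others, the tree reaches N simultaneous viewers in about log base f of N hops, so even a modest upload surplus keeps the worst-case latency logarithmic. A quick check (the viewer count is illustrative):

import math

def relay_depth(num_peers, fanout):
    # Hops from the trunk source to the outermost branch peers.
    return math.ceil(math.log(num_peers, fanout))

for fanout in (2, 4, 8):
    print(fanout, relay_depth(100_000, fanout))
# fanout 2 -> 17 hops, 4 -> 9, 8 -> 6: merely doubling each peer's
# upload (fanout 2) already keeps the tree depth manageable.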

This policed, free P2P would probably be best implemented by installing a webserver on the viewer's computer which would do all the P2P communication, and thus, once installed, would be seamless to a standard web-browsing experience. For example, the user clicks a link to download a file; this link will refer to "localhost..." which will invoke the local webserver to download the file via the policed P2P protocol and then pass it through to the web browser. So from the user's perspective, it is just like clicking a normal link. The only extra thing the user needs to do is install the P2P application once (which will install a local webserver on their computer and configure it with P2P scripts, probably written using PHP). This seems workable.
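
A minimal sketch of that local pass-through webserver, using only the Python standard library (the port number and the P2P fetch function are my hypothetical placeholders; the real design would run the policed P2P protocol inside fetch_via_p2p):

from http.server import BaseHTTPRequestHandler, HTTPServer

def fetch_via_p2p(resource_id):
    # Stub: in the real design this would run the policed P2P protocol.
    raise NotImplementedError

class P2PProxy(BaseHTTPRequestHandler):
    # Pages link to http://localhost:8642/<resource-id>, so to the user
    # it is just an ordinary click on an ordinary link.
    def do_GET(self):
        try:
            body = fetch_via_p2p(self.path.lstrip("/"))
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)
        except Exception:
            self.send_error(502, "P2P fetch failed")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8642), P2PProxy).serve_forever()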

The current testing I am doing is with a XAMPP local webserver.


Last edited by Shelby on Mon Nov 02, 2009 4:29 am; edited 2 times in total

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down

Computers: Empty Base classes can be _ELIMINATED_ with interfaces

Post  Shelby Mon Nov 02, 2009 3:28 am

I was correct before, except I conflated the word "extended" with "eliminated" in my mind:

http://lambda-the-ultimate.org/node/1277#comment-51723

The most robust solution to Tim Sweeney's problem is to rethink what a "class" should be:

http://www.haskell.org/pipermail/haskell-cafe/2009-November/068432.html

Theorems are included at the above link.
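
The links above make the argument in Haskell (typeclass) terms; purely as my own rough illustration of the same shape in Python (not the linked formulation), a structural interface can replace a base class entirely, since conformance requires no inheritance chain:

from typing import Protocol

class Drawable(Protocol):
    # An interface: any type with a matching draw() conforms,
    # with no base class anywhere in its inheritance chain.
    def draw(self) -> str: ...

class Circle:      # note: no base class
    def draw(self) -> str:
        return "circle"

class Square:      # written independently, still conforms
    def draw(self) -> str:
        return "square"

def render(shape: Drawable) -> str:
    return shape.draw()

print(render(Circle()), render(Square()))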

This is my final email to you all on the matter of OOP sets.

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down

Computers: Empty Essence of Functional Programming for Imperative Programmers

Post  Shelby Wed Nov 04, 2009 7:04 am

New concise guide I am creating:

http://www.coolpage.com/commentary/economic/shelby/Functional_Programming_Essence.html

I was able to condense Category theory and implementation of Monads to one screen:

http://www.coolpage.com/commentary/economic/shelby/Functional_Programming_Essence.html#Monads
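
The guide itself is written against Haskell; as a taste of what "condensed to one screen" means, here is my own rough transliteration (not taken from the guide) of the Maybe monad's two operations into Python:

from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def unit(x: A) -> Optional[A]:
    # Haskell's 'return': wrap a plain value into the monad.
    return x

def bind(m: Optional[A], f: Callable[[A], Optional[B]]) -> Optional[B]:
    # Haskell's '>>=': short-circuit on None, else apply f.
    return None if m is None else f(m)

def half(n: int) -> Optional[int]:
    return n // 2 if n % 2 == 0 else None

# Chaining computations that may each fail, with no explicit
# None-checking in the calling code:
print(bind(bind(unit(12), half), half))  # 3
print(bind(bind(unit(10), half), half))  # None (5 is odd)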

Overall, I think I have a unique method for comparing and condensing the explanation of the transition from imperative to pure functional. What do you think?

It is a work-in-progress, so corrections, feedback, and flames are welcome. I will do the OOP section next and incorporate the explanations from these posts:

http://www.haskell.org/pipermail/haskell-cafe/2009-November/068440.html
(data vs. Module)
http://www.haskell.org/pipermail/haskell-cafe/2009-November/068432.html
(interface vs. virtual)

P.S. The link will not change if you want to link to it now. If you mirror it, please update your mirrors periodically. There is no copyright claimed, I don't believe in copyrights any more. I intend to publish everything as PUBLIC DOMAIN (i.e. no license at all, because licenses impact composability). If I want to charge, I will put functionality behind an unpublished interface (i.e. Module).

P.P.S. I only started learning functional programming about a week ago.

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down

Computers: Empty How to block an IP at low level on Windows XP (probably works in Vista)

Post  Shelby Thu Nov 12, 2009 9:50 am

> My DLink router is accessible via:
> http://192.168.0.1
>
> I tried setting an admin password, but doesn't seem
> to block access. How can I block that IP address at low level in Windows
> (i.e. isn't there a file in the SYSTEM folder where you can reassign IPs or
> something)? I can then block access to that file after I edit it by
> creating a new Windows user login that doesn't have access to that file.

%windir%\system32\drivers\etc\hosts does not work (the hosts file only maps hostnames to IP addresses, so it cannot block access to a raw IP). Thanks for the tip to use IPSec in the policy editor.

Here is exactly how to do it (already tested in IE and FF):

1. Start -> Run secpol.msc
2. In left pane, right-click "IP Security Policies On Local Computer"
3. Choose All Tasks -> Import Policies...
4. Select the attached file
5. In right pane, right-click "Block DLink Router", choose Assign

Here is how I created that policies configuration:

1. Start -> Run secpol.msc
2. In left pane, right-click "IP Security Policies On Local Computer"
3. Choose Manage IP filter lists and filter actions...
4. In dialog box under "Manage IP Filters" tab, click Add button at bottom
4a. Type "DLink Router" in Name, and "http://192.168.0.1" in Description
4b. Check "Use Add Wizard", click Add button
4c. In Add Wizard, Source = "My IP", Dest = "192.168.0.1"
4d. In Add Wizard, Protocol = "TCP" (else DNS to the router is blocked for all domains)
5. In dialog box under "Manage IP Filters" tab, click Add button at bottom
5a. Type "Block" in Name, choose Block radio button
6. Click Ok to dismiss dialog box
7. In left pane, again right-click "IP Security Policies On Local Computer"
8. Choose Create IP Security Policy...
9. Type "Block DLink Router" for Name and description as 4a, uncheck "Active default response rule"
10. In dialog box after Wizard (editing the new rule), do not check the rule just created by that Wizard, but add a new rule and select the IP filter and action created above, and check it to active it.
11. In right pane, right-click "Block DLink Router", choose Assign
12. In left pane, right-click "IP Security Policies On Local Computer"
13. Choose All Tasks -> Export Policies...
14. Save to the attached file
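
In principle the same policy can also be scripted with XP's netsh ipsec static context instead of the GUI. The flags below are from my memory and unverified, so treat this as a sketch to check against netsh ipsec static add /? rather than a tested recipe:

netsh ipsec static add filterlist name="DLink Router"
netsh ipsec static add filter filterlist="DLink Router" srcaddr=me dstaddr=192.168.0.1 protocol=TCP
netsh ipsec static add filteraction name="Block" action=block
netsh ipsec static add policy name="Block DLink Router" assign=no
netsh ipsec static add rule name="Block DLink" policy="Block DLink Router" filterlist="DLink Router" filteraction="Block"
netsh ipsec static set policy name="Block DLink Router" assign=y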

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down

Computers: Empty Accepted my Idea for Fixing "space leaks" in Pure Functional Programming

Post  Shelby Thu Nov 12, 2009 10:00 am

One of the key creators of Haskell accepted my idea and marked it as a "feature request". I was very happy to see that my idea was not shot down too early. If you read the sub-links on that page, you will see why I think it is one of the key changes needed to move pure functional programming to the mainstream; and as mainstream, I assert it has the potential to leap computer utility forward by an order of magnitude (perhaps similar in magnitude to what the internet did to computers in 1995).

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down

Computers: Empty Who will be the 1st to have 1 billion users?

Post  Shelby Tue Nov 17, 2009 9:04 am

I have a plan to usurp Twitter by sticking a proxy between them and the developing world:

http://www.techcrunch.com/2009/07/16/twitters-internal-strategy-laid-bare-to-be-the-pulse-of-the-planet/

Farewell to the gold community. I hope to do a lot of good with education as part of my plan.


Keep at least 10% of your net worth in gold & silver, as we move 3+ billion people into the workforce, with some similarities (and some big differences, e.g. increasing taxes!) to when we moved 300+ million boomers into the workforce in the 1970s.

==========================
Crumbling wall of censorship

Understand how the world is being changed by peer-to-peer (P2P) spread of information, and you will get some clue as to why I think I am tapping into something HUGE:

http://www.lewrockwell.com/orig10/lindorff2.1.1.html

One thing I learned from living and working as a journalist and journalism teacher in China back in the 1990s is that the Chinese people, with their long experience of living in a totalitarian dictatorship in which all media are owned and tightly controlled by the state and the ruling Communist Party, are acutely aware that they are being lied to and that the truth is being hidden from them. Accordingly, they have learned to read between the lines, to pick up subtle hints in news articles which honest journalists have learned how to slip into their carefully controlled reports. They have also developed a sophisticated private system of person-to-person reporting called xiaodao xiaoxi or, literally, “back-alley news.” This system used to be word-of-mouth between neighbors and friends. As telephones became ubiquitous, it was done by phone, allowing transmission over long distances quickly. Now there is the internet, which, while it is systematically controlled via what has become known as China’s “Great Firewall” – effectively all of China is like a vast corporate “intranet” which blocks access to outside websites – still allows the flow of email. This is nearly impossible to monitor, particularly when the messages are not bulk mailed to large numbers of addressees.

So in China, reports of corruption, of local rebellions or strikes, of internal struggles within the government or party, or of important news about the outside world that the government wants to keep at bay, manage to circulate widely inside China despite a huge state censorship apparatus.

This alternative highly-personal news network works because the Chinese people know they are being lied to and kept in the dark, and they want to break through that official shroud of secrecy and control.

In the US, in contrast, we have a public that for the most part is blissfully unaware of the extent to which our news is being censored, filtered and controlled. Like the President (who knows better), we boast of our “free press,” and our open society, and indeed, as a journalist, I am free to write what I want to write.


=======================
Information can not be owned:

http://mises.org/daily/3864

This is why I think I have the correct business model. I am not going to explain it in detail, because I want "first to market" advantage, but my plans should become obvious soon, if I am successful.

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down

Computers: Empty Better Answers to Google's Interview Questions

Post  Shelby Thu Nov 19, 2009 10:02 pm

Apologies if this is quite egotistical, but it is just to demonstrate that most IQ tests are in fact wrong.

Better answers (from me):

http://www.businessinsider.com/answers-to-15-google-interview-questions-that-will-make-you-feel-stupid-2009-11#why-are-manhole-covers-round-5

Manhole covers are round mainly so they can't fall into the hole, but given that they are quite heavy, roundness also enables them to be rolled. However, this could be a disadvantage if risk of theft is a major factor, in which case the ideal shape is an equilateral triangle or a figure 8, or more generally any N shapes attached together where the maximum of the minimum grouped dimension (i.e. the "width") is greater than the maximum dimension of any of the individual shapes.

http://www.businessinsider.com/answers-to-15-google-interview-questions-that-will-make-you-feel-stupid-2009-11#you-are-given-2-eggs-13

This is a minimum-path optimization question, i.e. finding the shortest distance one needs to travel. The more general answer (if you really want to impress Google) is that, given M eggs to break for N floors, the maximum number of drops = M * (N^(1/M) - 1), i.e. M times one less than the Mth root of N. For 2 eggs the Mth root is the square root; for 3 eggs, the cube root. The reason the Mth root of N yields the shortest path is that it gives the set of M equal factors, the smallest numbers that can be multiplied together to produce N. For example, with 1000 floors and 3 eggs, the cube root of 1000 is 10, so first you drop in increments of 1000/10 = 100 floors, which is at most 10 drops. Then you drop in increments of 10, then finally in increments of 1.

However, note that someone actually deduced a shorter path, another person derived the generalized equation for 2 eggs and N floors, and another alluded to the generalized M-eggs-and-N-floors form.
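
For what it's worth, here is a quick sketch that computes the worst-case drop count of the fixed-increment strategy described above (my own model of it, counting every probe):

import math

def worst_case_drops(floors, eggs):
    # Fixed-stride strategy: probe every (block // stride) floors, then
    # recurse into the surviving block with the next egg.
    stride = round(floors ** (1 / eggs))
    total, block = 0, floors
    for _ in range(eggs):
        step = max(1, block // stride)
        total += math.ceil(block / step)  # probes at this level, worst case
        block = step
    return total

print(worst_case_drops(1000, 3))  # 30
print(worst_case_drops(100, 2))   # 20
# If an egg is guaranteed to break from the top floor, the last probe of
# each level can be skipped, recovering M * (N^(1/M) - 1): 27 and 18.
# The true optimum (decreasing strides) is smaller still, e.g. 14 drops
# for 100 floors and 2 eggs.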

http://www.businessinsider.com/answers-to-15-google-interview-questions-that-will-make-you-feel-stupid-2009-11#how-many-times-a-day-does-a-clocks-hands-overlap-7

The answer given was incorrect, because it only included the hour and minute hands and did not account for a military (24-hour) clock. The more general and correct answer for the minute and hour hands: overlaps per day = 24 - (24 / number of hours on the clock face), e.g. 24 - 24/12 = 22 for a standard clock. For a military clock (hours 1 through 24 on the face), the answer = 24 - (24/24) = 23. The underlying rule is relative frequency: overlaps per day = revolutions of the faster hand minus revolutions of the slower hand. By that rule, the minute and second hands overlap 1440 - 24 = 1416 times, and the hour and second hands overlap 1440 - 2 = 1438 times on a standard clock.
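
The relative-frequency rule in a few lines (the revolutions per day are exact, so this is just the subtraction):

def overlaps_per_day(revs_fast, revs_slow):
    # Hands overlap once per full relative revolution.
    return revs_fast - revs_slow

print(overlaps_per_day(24, 2))     # minute vs hour, 12-hour face: 22
print(overlaps_per_day(24, 1))     # minute vs hour, 24-hour face: 23
print(overlaps_per_day(1440, 24))  # second vs minute: 1416
print(overlaps_per_day(1440, 2))   # second vs hour, 12-hour face: 1438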

http://www.businessinsider.com/answers-to-15-google-interview-questions-that-will-make-you-feel-stupid-2009-11#explain-the-significance-of-dead-beef-8

The correct answer is that it depends on the context of the field of inquiry. To jump to the conclusion that "dead beef" is the magic hexadecimal marker 0xDEADBEEF (I haven't seen that for 20 years, and certainly I don't have the long-term sparse-memory recall to have clued in on that) is myopically presumptuous without further qualification. Btw, this is the sort of question on which I was marked incorrect on IQ tests, but for which I feel the IQ test examiner was wrong.

http://www.businessinsider.com/answers-to-15-google-interview-questions-that-will-make-you-feel-stupid-2009-11/you-need-to-check-that-your-friend-bob-has-your-correct-phone-number-10

The answer given was incorrect; someone else derived the same correct answer that I thought of, which is that the phone number is the encryption key, e.g. send the MD5 hash of the phone number and ask Bob to reply whether it matches.
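
A sketch of that scheme (the function name is mine): the number itself never travels, only a digest of it does.

import hashlib

def digest(phone_number):
    return hashlib.md5(phone_number.encode()).hexdigest()

# You write digest(your copy of the number) in the note; Bob compares it
# against the digest of the number he has on file and replies yes/no.
# (Phone numbers are low-entropy, so MD5 here only defends against
# casual disclosure, not a determined brute-force search.)
print(digest("555-0100") == digest("555-0100"))  # True -> numbers match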

http://www.businessinsider.com/answers-to-15-google-interview-questions-that-will-make-you-feel-stupid-2009-11/in-a-country-in-which-people-only-want-boys-3

The answer given was incorrect, and someone pointed out that, interestingly, a preference for boys will actually lead to more girls in the population! Those with a slightly higher probability of conceiving girls will produce more girls until they produce a boy, but those with a slightly higher probability of producing boys will stop producing at their first one. An interesting Biblical rule: you cannot go against God's plan without reaping what you sow. Someone noted the humorous irony.
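
The claim is checkable by simulation: with a homogeneous 50/50 birth ratio the stopping rule changes nothing, but if the per-family probability of a girl varies around 50%, the expected girls per family, E[p/(1-p)], exceeds the exactly one boy per family (Jensen's inequality, since p/(1-p) is convex). A quick sketch, with illustrative parameters:

import random

def population(num_families, girl_prob):
    # Each family keeps having children until its first boy, then stops.
    girls = boys = 0
    for _ in range(num_families):
        p = girl_prob()
        while random.random() < p:  # each birth that is a girl
            girls += 1
        boys += 1                   # the boy that ends the family
    return girls, boys

random.seed(1)
print(population(100_000, lambda: 0.5))                      # ~50/50
print(population(100_000, lambda: random.uniform(0.3, 0.7))) # more girls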

http://www.businessinsider.com/answers-to-15-google-interview-questions-that-will-make-you-feel-stupid-2009-11#you-are-shrunk-to-the-height-of-a-nickel-15

I was impressed with someone's answer to the blender question. I was thinking that since our mass would be less than a piece of paper's, we would have no chance of staying in a set position, nor of controlling the direction we were blown, unless we could avoid the air vortex and centrifugal force at the center of the blade (but the spinning would probably kill us). So I was thinking to go to the base of the blade axis and find a rubber gasket to bite and make handles to hold on to. In theory, by the square-cube law, our strength-to-mass ratio would be increased by orders of magnitude.

http://www.businessinsider.com/answers-to-15-google-interview-questions-that-will-make-you-feel-stupid-2009-11#youre-the-captain-of-a-pirate-ship-11

The answer given above is incorrect. If you offer all the booty to 51% of the crew, one of them might be motivated to vote against your proposal, because he has nothing to lose if your proposal wins anyway (and he might want to try his luck at another proposal that offers him more). Contemplate that deeply! Whereas, the correct answer is to propose that anyone who votes NO will not share in the booty. TADA!

Again this is Biblical natural law at play, wherein you can't conspire against society without reaping what you sow. It also demonstrates why socialism spreads in a democracy: the group has to force inclusion through equality, but we know equality is a world of no contrast (nothing would exist).

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down

Computers: Empty Internet Revolution is alive and will continue to change the world more than you think

Post  Shelby Sun Nov 29, 2009 12:01 am

Read this:

https://goldwetrust.forumotion.com/economics-f4/inflation-or-deflation-t9-285.htm#2391

Then this:

http://www.lewrockwell.com/north/north786.html

And if they try to shut down the internet, we will simply go around them:

http://www.lewrockwell.com/orig9/green-p3.1.1.html

Imagine if my internet cafes were each separated by a distance of less than 6.2 miles (10 km): I could simply network them using a high-gain directional antenna and a standard wireless router (if I had 1000 net cafes networked together, with a huge cache of the internet, many of the services would still be functional):

http://www.amazon.com/2-4GHz-Square-Parabolic-Antenna-24dBi/dp/B000V0ONTI

Shelby
Admin

Posts : 3107
Join date : 2008-10-21

http://GoldWeTrust.com

Back to top Go down
