The technical basis of reconism

Computers

In the previous chapters, we demonstrated that a radical transformation of all societal institutions is currently gaining momentum, possibly the most significant since the Neolithic Revolution. The technological foundation for this change will be computers and the internet. Are they ready for such a responsible mission? More likely yes than no. Over the past fifty years, information technology has developed at such a pace that information about its shortcomings and weaknesses that was relevant ten or twenty years ago now seems like a myth or legend.

Perhaps the most significant layer of myths and misconceptions about computers relates to their supposed unreliability, vulnerability to hacking, glitches, and the ease of forging any digital data. This was once true. Like any untested technology, 30-40 years ago, computers and networks were indeed so “leaky” that hacking them often posed little challenge, and failures and data loss occurred quite frequently. But much has changed since then. Aviation has developed in a similar way. Early airplanes flew poorly and crashed often. Now, flying is several times safer than traveling the same distance by car, or even walking. Yet many people still fear flying. The perceived danger of air travel is higher because every aviation disaster is guaranteed to be reported in the news worldwide, while traffic accidents, except for the most significant ones, are usually only shown on local channels.

The same goes for computers. Damage to paper documents in archives surprises no one, and neither do forgery and fraud. But weave in mythical all-powerful “hackers,” and you have a sensation. The novelty effect is no longer what it was in the 80s and 90s, but the stereotype remains. Meanwhile, hacking into any serious network has become such a non-trivial task that it happens very rarely. Defacing a website, replacing a company’s logo with an obscene message, or pulling some similar prank is still common enough. But breaching a network belonging to a bank or the military is now practically impossible. Infiltrating a well-protected network requires long, painstaking work and differs little from a serious investigation or the preparation of a CIA special operation [87]. Yet in the next action movie, we’ll once again watch the “hacker” put on a serious face, pound frantically on the keyboard, and within a couple of minutes hand the main villain the access code to the nuclear button.

Today, most transactions on the planet are conducted in cashless form, through the electronic exchange of account records [88]. It’s more reliable this way: organizing a robbery of an armored car or a bank is much easier than hacking its server. The most secure documents are now increasingly protected not by holograms and watermarks but by embedded electronic chips [89]. A car key fob contains a microprocessor and relies on cryptography; the “magnetic” key to an apartment building entrance is electronic as well.

The reliability of electronics has also increased dramatically over the past few decades. Nowadays, electronic components become outdated long before they physically break down. The first computers needed repairs every few hours. A modern computer can run continuously for several years without shutting down—most failures and breakdowns are typically caused by external factors or improper use. Additionally, modern software is capable of handling failures, and they usually go unnoticed by the user. All essential information is now redundantly backed up, with copies easily stored on different continents just as readily as in neighboring rooms. And all of this is accessible to everyone, not just governments and corporations.

Another myth is the “revolt of the machines.” The uprising of artificial intelligence against its creators is a very popular scare story. However, it stems from the same fear that illiterate peasants had towards the first locomotives or automobiles. It is natural for us to fear the unknown, just in case [90]. If our ancestors hadn’t flinched at the slightest strange rustle behind them, they simply wouldn’t have survived. We automatically categorize everything incomprehensible as potentially dangerous. That’s why among those who have even a little understanding of computers, belief in evil terminators is much rarer. Those who understand them well enough just laugh at these tales. And the loudest laughter comes from artificial intelligence specialists. They know better than anyone that even the smartest computers are still far from being as intelligent as cats and dogs. Moreover, intelligence that is below human level will not pose any threat by definition, as it won’t be able to truly go out of control and outsmart us; and if it turns out to be above human intelligence, it is very doubtful that it would be aggressive. Belligerence and bloodthirstiness are the sure companions of ignorance, foolishness, and the dominance of instincts over reason [2]. The higher a person’s intelligence, the more capable they are of resolving conflicts peacefully. Why, then, should artificial intelligence behave the opposite way?

What is the situation like today? What has made computers and networks so reliable?

Cryptography

Cryptography is one of the foundations of modern information technology. This science has existed for several millennia, but its flourishing began only in the second half of the 20th century and is closely linked to computers. The modern phase of cryptography’s development started during World War II [91]. Decrypting enemy messages and securely encrypting one’s own were among the key tasks for both warring sides, and enormous resources were dedicated to these efforts. It was during the British work on breaking German ciphers, work in which Alan Turing played a leading role, that the first fully electronic computer, “Colossus,” was built.

The creation of programmable electronic computers in the post-war years elevated cryptography to an unprecedented level. It became a full-fledged science with a robust mathematical framework. The practical results of the development of cryptography today are as follows:

  • Most of the cryptographic systems in use today rely on open and well-studied algorithms. The encryption methods are so sophisticated that decrypting a message without knowing the key is computationally infeasible, even if the attacker knows everything about the encryption system. Contrary to intuition, using non-standard, secret algorithms does not increase but rather decreases the reliability of a cipher: thousands of researchers around the world are working on identifying vulnerabilities in the widely used algorithms, and the likelihood that a malicious actor will discover a “hole” before any of them is extremely low.
  • The encryption technologies available to ordinary citizens now almost match those developed for military and government use. Anyone with a basic level of technical knowledge can, if they wish, encrypt their information in such a way that no intelligence agency in the world can decrypt it in a reasonable amount of time. As a result, many countries have legal restrictions on the use of cryptography.
  • In addition to encryption itself, cryptography provides methods for verifying the authenticity and integrity of information. These methods are just as reliable as encryption techniques—digital signatures or certificates authenticate any message or document much more effectively than a handwritten signature, seal, or hologram.
  • Currently, public key cryptosystems are widely used. They allow for the exchange of encrypted messages and the verification of their authenticity without the need for prior key exchange over a secure channel. The power of modern computers is such that encryption and decryption can be performed “on the fly,” completely transparently to the user.
  • Today, the hacking of cryptographic systems can only be practically achieved through indirect methods—such as bribery or coercion of individuals who possess the keys, through espionage or eavesdropping, or through subtle modifications of the hardware or software used for encryption.

Public key encryption

Just as a passport verifies the identity of its holder in the real world, public key infrastructure (PKI) allows for identity verification in the realm of computer networks.

PKI ensures that anyone claiming to be a specific person truly is that person, which is crucial when conducting responsible transactions, such as placing orders or transferring money.

The essence of PKI lies in the use of very large integers called keys. The keys come in pairs: a private key, which only you have access to, and a public key, which anyone can use. The two keys work together: a message encrypted with the private key can only be decrypted with the public key, and vice versa. Just as a handwritten signature verifies your identity on paper, a digital signature confirms your identity online. The document to be signed is “passed” through a mathematical algorithm that outputs a single large number known as a hash code. If even the smallest change is made to the message, such as moving a comma, the hash code changes completely.
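To see just how sensitive a hash code is, here is a minimal sketch using Python’s standard hashlib module; the messages themselves are arbitrary examples.

```python
import hashlib

# Two messages that differ only in the position of a single comma.
msg1 = b"Pardon impossible, send to Siberia"
msg2 = b"Pardon, impossible send to Siberia"

# SHA-256 reduces each message to one fixed-size number (the hash code).
print(hashlib.sha256(msg1).hexdigest())
print(hashlib.sha256(msg2).hexdigest())
# The two hexadecimal strings have nothing in common, even though
# the messages differ only by where the comma is placed.
```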

To add a digital signature to a document, a hash code created based on its content is encrypted using the user’s private key (let’s call him Bob). Another person (Alice) can verify the authenticity of the document by decrypting the hash code with Bob’s public key and comparing it to the hash code generated from the received data.

If the hash codes match, the data has not been altered by a third party — creating such a signature is only possible if one possesses Bob’s private key. However, an attacker could have substituted Bob’s public key at the moment when Alice first received it.

How can Alice determine if the key she has for signature verification is correct? This is done using a system of trusted root certificates. The public key created by Bob is signed by a certificate authority with its own private root key after verifying his identity. The public keys of such authorities are widely known; for example, they are “embedded” within all popular web browsers, making it nearly impossible to replace them unnoticed. Bob’s public key, along with his credentials or personal information, signed by the certificate authority, serves as his personal digital “passport” or certificate.

Let’s see how this all works with a simple transaction example. Bob wants to send Alice a confidential email. To encrypt his message, he will use Alice’s public key, which is stored in her certificate, ensuring that he is certain this key belongs to Alice. Bob will sign the message with his private key. When Alice receives the message, she will decrypt it using her private key. Since only she has access to her private key, only she will be able to reveal the message. By obtaining Bob’s public key from his certificate, she will be able to verify the authenticity of the signature and ensure that the message, first of all, came from Bob and, secondly, has not been altered along the way.

Based on the materials: http://www.osp.ru/cw/1999/22/35858/
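To make the Bob-and-Alice walkthrough above concrete, here is a minimal sketch in Python using the third-party cryptography package (any mainstream public-key library would do equally well); it shows only the signing and verification half of the exchange, with a made-up message.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Bob generates a key pair; in practice his public key would be
# certified by a certificate authority, as described above.
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()

message = b"Alice, please transfer 100 to account N"  # hypothetical message

# Bob signs: the message is hashed (SHA-256) and the hash is
# bound to his private key.
signature = bob_private.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Alice verifies with Bob's public key; any change to the message
# (or a signature made with another key) raises InvalidSignature.
try:
    bob_public.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Signature valid: the message is authentic and unmodified")
except InvalidSignature:
    print("Signature invalid: the message was altered or forged")
```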


So, modern cryptography is reliable enough for any practical application. It is widely used by banks, intelligence agencies, corporations, and governments, and it is easily accessible—“civilian” encryption algorithms are on par with military ones. Against ordinary people, indirect attacks such as coercion or covert surveillance are highly unlikely, and open development provides reliable protection against tampering with the encryption software. Open development is what we turn to next.

Open Source

As with encryption algorithms, intuition suggests that it’s easier to introduce malicious modifications into open-source software, but that’s not the case. Open development occurs behind a “transparent wall.” Just as in a supermarket’s meat-processing area, anyone can see what’s happening inside, but only a limited number of people are allowed in. All open projects today use version control systems [92], which track and save every change made to the project’s code. Anyone can view this history, much like the edit history of a Wikipedia article. Popular projects are watched by thousands of programmers around the world. Making unnoticed changes to the code, bypassing the version control system, is simply impossible: this is guaranteed by cryptography. Only a small number of team members have the right to commit changes, and each of them has a key with which they sign their commits. Every edit has a specific author, and it is always clear who is responsible for it [92]. If someone from outside wants to participate in the development, they create a copy of the project, make changes to it, and submit those changes for the team’s review.

In closed development, the only guarantee is the manufacturer’s reputation. No one can verify what’s really inside. And even if the reputation is impeccable, the company can always be pressured by the government to impose restrictions, such as limiting the length of encryption keys, or leaving other loopholes for “Big Brother.”

Unfortunately, such openness is technically impossible for hardware. However, unlike software, intentionally leaving “backdoors” in mass-produced items, such as microprocessors, is an absolutely suicidal policy for manufacturers. In the case of software, one can say, “Oops! We didn’t mean to!” and quickly release a patch to fix the issue, but hardware will go straight to the landfill, causing enormous losses. The only way to equip a physical device with “bugs” at the factory is to do so officially, using legislative mandates or a cover story about “security” or “fighting piracy.” It is impossible to combat this with technical methods. The mass entry of such devices into the market, for example under the pretext of “fighting terrorism (piracy, child pornography, etc.),” poses one of the most serious threats to the network.

P2P

Peer-to-peer (P2P) networks are networks without a central node. An example of such a node would be a server hosting a website or a main banking computer. If this node fails, the entire network becomes inoperable. In a P2P network, each participant acts as both a client and a server. To seriously disrupt the operation of such a network, one would need to destroy or take control of a large majority of the nodes, which is practically impossible for a sufficiently large network. In addition to reliability, an important advantage of peer-to-peer networks is scalability. For instance, if a certain file is hosted on a server in a centralized network, increasing the number of clients a hundredfold will likely overwhelm the server, as it won’t be able to handle the drastically increased load. In a P2P network, however, each client assists others by sharing the parts of the file they have instead of relying on a server. Therefore, the more people download the file, the faster and more reliably each of them can complete their download.

Google spends hundreds of millions of dollars a year [93] on its video hosting service YouTube. Meanwhile, sharing video files through the BitTorrent network happens almost by itself. If Google wanted to (or were forced to), YouTube could cease to exist in an instant; file sharing in peer-to-peer networks, by contrast, thrives despite all attempts to eliminate it. P2P guarantees that if a sufficiently large number of people want certain information to be shared or a particular service to keep operating, no corporation or government can stop it.

P2P networks also have another important property — the inherent reliability of the information stored in such networks. If information is hosted on a single server, an attacker with certain privileges in the system can modify the data unnoticed. In a peer-to-peer network, the same information is distributed in multiple copies across many nodes, and unauthorized changes to one of the copies, which an imaginary attacker has access to, will render that copy unacceptable to the other members of the network (since the authenticity of the copy is cryptographically verified) and will not destroy the original information, which will still be accessible to the other nodes in the network. If any changes to the data are recorded and stored, similar to edits in Wikipedia, then it becomes nearly impossible to secretly erase or modify the old information.
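A minimal sketch of the kind of check a node can perform: the publisher’s piece hashes are known in advance (in real networks they arrive in a signed manifest), so a distorted copy is detected immediately and only the damaged pieces are re-downloaded from other peers. The chunk size and data here are purely illustrative.

```python
import hashlib

def chunk_hashes(data: bytes, chunk_size: int = 4) -> list[str]:
    """Split data into fixed-size pieces and hash each piece, the way
    file-sharing networks verify a download piece by piece."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

# The publisher announces the expected piece hashes (the "manifest").
original = b"some widely mirrored document"
manifest = chunk_hashes(original)

# A peer offers a copy in which one byte has been silently altered.
tampered = b"some widely mirrored docoment"

received = chunk_hashes(tampered)
bad_pieces = [i for i, (a, b) in enumerate(zip(manifest, received)) if a != b]
print("corrupted pieces:", bad_pieces)  # the node rejects and re-fetches
                                        # only these pieces from other peers
```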

Copyright organizations, for example, have repeatedly attempted to disrupt the operation of file-sharing networks by setting up nodes that intentionally spread distorted data [94]. The overwhelming majority of users simply never noticed these efforts.

An interesting example of the combination of the three technologies mentioned above is the cryptocurrency Bitcoin [95]. Its inventors aimed to create a medium of exchange free from the drawbacks of paper money—namely, inflation and dependence on the (corrupt and incompetent) policies of national banks. The economic principles underlying Bitcoin may raise doubts, but the technical feasibility of creating such a payment system and its reliability have now been proven by experience. Cryptography ensures the authenticity of Bitcoin transactions, open development eliminates the possibility of embedding “bugs” and “holes,” and the distributed peer-to-peer architecture of the network guarantees that it cannot be shut down through administrative means.
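To give a flavor of how such a ledger resists tampering, here is a toy hash chain with proof-of-work in Python. It is a deliberately simplified illustration of the principle, not Bitcoin’s actual data structures or parameters.

```python
import hashlib, json, time

DIFFICULTY = 4  # toy setting: a valid block hash must start with 4 zeros

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON representation.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash: str, transactions: list) -> dict:
    """Search for a nonce so that the block's hash meets the difficulty target."""
    block = {"prev": prev_hash, "tx": transactions, "time": time.time(), "nonce": 0}
    while not block_hash(block).startswith("0" * DIFFICULTY):
        block["nonce"] += 1
    return block

# A two-block toy chain: altering any past transaction changes that block's
# hash, breaks the "prev" link of every later block, and forces all the
# proof-of-work to be redone — which is what makes rewriting history costly.
genesis = mine("0" * 64, ["coinbase -> alice 50"])
second = mine(block_hash(genesis), ["alice -> bob 10"])
print(block_hash(genesis), block_hash(second), sep="\n")
```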

Artificial intelligence

Unlike the first three technologies, which are widely adopted and well-studied, artificial intelligence is still in its early stages. What specialists refer to as “artificial intelligence” bears little resemblance to human-like intelligence; it is more akin to isolated fragments of it. Pattern recognition is quite widely used, significant progress has been made in speech recognition, and AI methods are employed by search engines, social networks, scoring systems in banks, and insurance companies. Experimental robotic cars are already driving on the streets of California and on racetracks in Europe (for now, with a driver who can take control of the steering wheel at any moment for safety). All these applications still require enormous resources and are only accessible to relatively large organizations or are classified as experimental developments.

However, approaches to creating self-organizing intelligent systems are actively being researched. These systems consist of numerous intelligent agents—small and not overly complex programs or devices—that will be capable of collectively solving complex problems, much like ants or bees. Just like peer-to-peer networks, they are easily scalable and highly reliable. Potentially, such systems should surpass the monolithic software-hardware complexes currently in use, just as the Internet, made up of diverse independent networks without a single owner or administrative center, has surpassed all previous information systems.

In general, the creation of systems on a super-large scale, encompassing the entire planet and containing hundreds of millions of components, each of which is relatively unreliable on its own, seems possible only based on decentralized technologies. Such systems should consist of practically independent parts and lack a rigid structure. And since machines are devoid of hormonally driven tendencies toward dominance, their integration into such a super-large, super-complex, and super-reliable system occurs much more quickly than the integration of social structures. Thus, information technologies are paving the way for subsequent radical changes in society.

Non-technical risks

As mentioned above, there are no longer purely technological obstacles. The likelihood of something failing, glitching, or being hacked has decreased to negligible levels. However, there is always the risk of deliberate “corruption” of technologies. Legislative restrictions on cryptography are a prime example of this. In addition to these and the previously mentioned devices with “bugs” and built-in limitations, concerns primarily arise from the increasing pressure on providers. The state, represented by a bureaucratic elite eager to monopolize its right to knowledge and information, compels them to surveil citizens, block access to undesirable content, or even completely disconnect certain users. Providers are forced to comply, as they are tied to their territory and cannot relocate to areas with a more favorable information climate. Their helplessness is widely exploited in the fight against content providers; for instance, the Chinese government, which virtually controls the information space within the country, has forced Google to censor its search results. “Individual terror” against people using file-sharing networks is also only possible with the forced cooperation of providers.

As long as the model of informism has not exhausted itself, this problem can only be addressed through political means, compelling the authorities to respect freedom on the Internet, just as they are forced to tolerate a free press and uphold human rights. Similarly, a hundred years ago, labor unions in the capitalist world fought for the recognition of their rights and compelled capitalists to make concessions. No purely technical means of protection will be as effective as a good strike or a mass protest. Any technical innovations that undermine the bureaucracy’s desire for power will either be allowed as the lesser evil under public pressure or will be controlled by the ruling elite, if not completely banned.

New businesses looking for profits online are also interested in transparency. This interest does not stem from any altruistic motives, but from cold, rational calculation. A telling example is the refusal of Pavel Durov, founder of the social network VKontakte, to block the activities of opposition groups and users during the mass protests in Russia in December 2011.


Pavel Durov: “We consider discussions about voting, elections, rallies, and civic engagement to be a form of mass entertainment, alongside debates about football matches and playing Happy Farmer.”

In these December days, while the youth and the OMON were enthusiastically playing at being revolutionaries and reactionaries, we were focused on our mundane craft, tracking audience requests. First, users encountered issues with conducting mass surveys; then, the vibrant opposition communities faced restrictions on the number of daily comments; finally, our target audience noticed a more user-friendly event service offered by our competitors. In all cases, we responded by modernizing the relevant service, with delays ranging from 15 minutes to two days.

Those who rushed to thank us for supporting political protesters are losing sight of a simple fact. If, during those same days, we had started to lose in the competitive struggle due to the lack of some service for virtual mass repression, we would have had to introduce it. And rest assured — our repressions would have been the most widespread and the bloodiest on the market.

Another matter is the sudden request from officials of the St. Petersburg FSB to block opposition communities. In fact, this was a proposal to voluntarily give a head start to all competing platforms, pushing the active and passionate part of the audience onto them. Competition in the global social media market without the ability to meet the demand for free communication is like a boxing match with your hands tied. If foreign sites continue to operate freely while Russian ones start to be censored, the Russian internet can only expect a slow death.

To create conditions for a game where such requests are unthinkable, I drew the attention of the audience, other internet companies, and then high-ranking government officials to the requests from the St. Petersburg FSB. Such actions should not be seen as an innocent mistake. If we want to preserve the domestic internet industry, requests to block the opposition are unacceptable. At least until we learn to win in boxing without using our hands.

In this exciting game at the beginning of December, we demonstrated not so much courage as common sense. We acted in the only way possible for the VKontakte team, as any other path was calculated to be fatal. Our resilience in dealing with government agencies inevitably stems from the fact that in the escalation of conflict with any department, our perspective prevails as the only reasonable one. Or, in some fantastical scenario where it doesn’t prevail, we simply trade a slow and painful death for our company for a quick and painless one.

If you think about it, Western commentators are now praising us for the very thing they always criticized us for — the lack of strict censorship of user activity. My rapid transformation from a “pirate” and “porn king” to a defender of freedom reflects only their inconsistency in beliefs. While they apply different standards to different types of censorship, our position remains unchanged and boils down to one statement: it is pointless to remove from one website what can be quickly found on others.

http://lenta.ru/articles/2011/12/12/durov/


In addition to local providers, there is the problem of the backbone communication channels. There are few of them, they are expensive, and they are vulnerable; it is currently impossible to split them into many independent and redundant fragments. During the mass protests in Egypt, for example, the internet was completely shut down for five days. Simultaneous intentional damage to several major cables running along the ocean floor could cut off an entire continent. However, this threat, while impressive, is less frightening than control over local providers—mass internet shutdowns are so costly to the economy that authorities resort to them only as a last resort, when it is usually already too late.

Society of Total Surveillance

…no one should have a dwelling or storage space that is inaccessible to anyone who wishes to enter.

Plato, “The Republic”

Whether we like it or not, tomorrow we will live in a society of total surveillance. Information technology will play a key role in this. Even now, in developed countries, almost all financial transactions are tracked and analyzed. Every step we take online can also be monitored. Surveillance cameras are installed everywhere. And this is just the beginning.

But notice this — any dystopian scenarios of the future involving “Big Brother” assume the presence of a small group of nefarious individuals who can learn everything about everyone while remaining in the shadows, cloaked in secrecy, state secrets, or behind the high walls of private, guarded territories. A naked person certainly looks pitiful and helpless when watched by people in uniforms (especially those with epaulets). However, in a bathhouse, he is quite calm and relaxed. In the bathhouse, everyone is equal.

And besides, you can’t properly take a steam with your clothes on! Total surveillance is a very convenient thing. What else do diligent servants do if not keep a close watch on their master? In a good, expensive restaurant, the waiter won’t take his eyes off you—better than any spy.

And we feel pleased when the shopkeeper at the corner store, without asking, pulls out exactly the ice cream we always get from the freezer. But if a stranger calls us in the evening and asks with a threatening tone, “Citizen, why did you take the chocolate on a stick today instead of the usual vanilla?” it feels a bit unsettling.

In other words, we do not feel threatened by total surveillance if we know who is watching us, why, and for what purpose, and if we are confident that the observer is either unable or unwilling to harm us. The existing systems of monitoring and tracking our actions cause us anxiety and distrust precisely because they do not provide us with such guarantees. Imagine that a camera is forcibly installed in your car, with the footage being sent to an unknown location and used for unknown purposes. Outrageous! Yet, more and more people are voluntarily installing dash cams in their vehicles because such devices can protect the owner in the event of an accident.

Even now, when a person goes into the desert or the mountains, they take devices with them that ensure communication with the outside world for their own safety. Tomorrow, these devices will evolve into universal recorders, and we will feel very uncomfortable without them. People will want others to know where they are and what they are doing. In a society where some people constantly document their activities while others do not, criminals will choose unprotected victims for their nefarious plans, prompting society to build such protections. In a city where surveillance cameras are installed in many homes, a house without them is more likely to be robbed.

A “total surveillance” system in which watching the watchers can never be switched off or restricted, in which data can be neither falsified nor destroyed, and in which everyone has full access to any information about themselves will make our lives simpler, safer, and more comfortable, while making life as difficult as possible for criminals, especially those who currently call themselves the “elite.” They are the ones building a system of total surveillance (no quotation marks needed) to preserve and increase their own power and wealth.

A public, independent information system, operating on the principles of multiple duplication and distribution of information, will be capable of tracking, storing, and providing any user with any information about legal and informational relationships between individuals, including information about who has requested information about the user themselves. This serves as a technical prerequisite for the development of reconism. The primary evidential tool for the truthfulness of the provided information will be the continuous registration of changes in its state. Practically, this will replicate human understanding of truth.

Such a tracking information system (TIS) should not be centralized, with each person or item having a single “main” account. From the perspective of wikification and the peer-to-peer ideology, the concept of a central account makes no sense, just as it makes no sense to “log in to the Internet” as a whole. Accounts for a specific person exist in social networks, supermarkets, banks, transportation companies that issue passes, taxi services, and even in the surveillance cameras that notice you every morning on your way to work, at your workplace, and even at your neighbor’s. A single “unified” account can be forged and manipulated; a multitude of accounts cannot.

The TIS will emerge as a unifying framework, bringing together many projects that are already starting today. It will arise much as Google arose on the Internet, and just as Google is not the only search engine, the TIS will not be something centralized or singular. The TIS will simply allow history to be traced: “I came from there, arrived here, did this in this place, and ended up here because I came from there; I wanted to come because certain needs arose, and those needs arose because certain information was received.” The TIS may be a purely virtual term describing a set of technical measures that enable people to share information about each other in various ways.

The architecture of the system, based on P2P principles, will allow the network to exist and perform its functions independently of the will of individuals or organizations. This will protect historical data from manipulation.

On the one hand, the emergence of such a “Big Brother” may feel oppressive. But think of a person in a public place: everyone passing by is aware of their presence and pays no attention to it, just as that person is aware of the passers-by around them. This is not a desert, and the number of gazes evokes neither fear nor surprise. Most people are completely indifferent to us, and we are equally indifferent to whether or not they know anything about us. If someone close to us wants to find something out, however, they can do so quite easily. The impossibility of secret surveillance renders surveillance meaningless.

At the same time, the system does not require guarantees of absolute and total accounting for everything in the world, down to cocktail straws. Sooner or later, a level of information tracking will emerge where missing or untracked information can be reconstructed from what is available. For example, if the system knows that a certain Ivanov is on vacation abroad because it recognized him crossing the border and did not register his return, it will automatically exclude him from the buyer database at a supermarket in Zhytomyr. The system won’t be fooled if, say, Ivanov was spotted half an hour ago buying a beach lounger near Odessa, and then, in Kyiv, a person who closely resembles Ivanov, with a passport that looks very much like Ivanov’s, is trying to buy an expensive car on credit.

When interacting with the system, speech will also be recognized just in case, and it will be checked whether the person is looking at the camera. Why is serious recognition not necessary? Simply because the system, upon detecting an individual in a certain location, can always track how they got there, where they were before, and where they were even earlier. If the logic and validity of movements are continuous, then we are looking at the same Ivanov who left his house an hour ago, bought a subway ticket 40 minutes ago, got on a train 30 minutes ago, and was also recognized by surveillance cameras, exited the train 10 minutes ago, and was identified by a camera at the nearest intersection to the supermarket 5 minutes ago. Moreover, this person always carried a communicator registered in their name.
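The underlying logic can be sketched in a few lines: a chain of observations is trusted only while each hop between them is physically plausible. Everything in this toy model—the sources, distances, and speed threshold—is invented purely for illustration.

```python
from dataclasses import dataclass

MAX_SPEED_KMH = 60.0  # assumed upper bound for plausible city travel

@dataclass
class Observation:
    source: str          # e.g. "home camera", "metro gate", "street camera"
    minute: float        # minutes since the start of tracking
    km_from_home: float  # crude one-dimensional position, for illustration

def is_continuous(track: list[Observation]) -> bool:
    """Accept the chain only if every hop could have been made at a
    plausible speed; a 'teleporting' Ivanov breaks the chain."""
    for a, b in zip(track, track[1:]):
        hours = (b.minute - a.minute) / 60.0
        if hours <= 0:
            return False
        speed = abs(b.km_from_home - a.km_from_home) / hours
        if speed > MAX_SPEED_KMH:
            return False
    return True

ivanov = [
    Observation("home camera", 0, 0.0),
    Observation("metro gate", 20, 1.5),
    Observation("train camera", 30, 4.0),
    Observation("street camera near supermarket", 55, 12.0),
]
print(is_continuous(ivanov))      # True: most likely the same Ivanov

impostor = ivanov + [Observation("car dealership in Kyiv", 60, 450.0)]
print(is_continuous(impostor))    # False: nobody covers 438 km in 5 minutes
```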

Items can and should also be identified using a tracking information system, and it is not necessary to embed any physical identifier in the items. It is sufficient for the system to determine the moment of acquisition of the item and track its whereabouts as it appears and disappears, accordingly adding it to or removing it from the individual’s inventory.

Recognizing objects is much easier than recognizing people. Naturally, equipping material goods with tags, such as radio identification chips, greatly simplifies the tasks of property control. These tags appear simply because it is more convenient for stores to sell products this way. The absence of a chip that should be on a certain type of product should raise concerns for users of information systems, and they may want to check whether a similar item has been reported missing and if it’s possible to track the movements of the new owner to the location where the lost item was last seen. Thus, having tags on all reasonably valuable items is in the direct interest of the owner.

Complex physical items can have multiple identifiers placed on important components of these items. It would be in the homeowner’s interest to install sensors and cameras accessible to the information system inside the house, so that a thief, for example, cannot hide a valuable item in a box after destroying its tag, remaining unnoticed by the system. Although the mere possession of an item without a tag should raise suspicion, it would be even more difficult to resell that item, as the information system would not be able to register the transaction.

The need for locks on doors may even disappear, as it will be reliably known who entered and exited, and when. In the event of a lost item that cannot be found, the circle of suspects narrows down to one specific person who was near the item before it went missing.

Identical items can continue to be marked with the same chips, just like now. While the item is new, it doesn’t matter whose it is. Over time, items acquire distinctive features, scratches, stains, the owner’s initials, and even scents, and the tracking system will be able to easily identify the ownership of the item.

Once again, let’s emphasize a very important feature of identification systems: artificial identifiers such as passports, radio tags, barcodes, bank cards, and the like play only a supplementary role. Having them simply makes the system’s work easier, allowing it in many cases to bypass the resource-intensive tasks of image recognition and history tracking by simply reading a digital code. The current level of biometrics already allows such “tricks” as withdrawing money from an ATM or paying at a supermarket with no identifier other than oneself. A person can be reliably identified by a number of parameters, ranging from fingerprints to the shape of their ear. A properly functioning recognition system will always use multiple parameters and expertly assess the likelihood that a particular individual is in a given location.

Even if you aren’t recognized right away, for example, at the airport when you return from vacation with a new hairstyle and tan, the system will say to itself, “Semyon Semyonovich!” and slap itself on the forehead, looking at who you call on your phone, what address you’re heading to, who is picking you up, and so on. The algorithms of the system should be designed to identify an object not from the entire dataset, but from a probable one, narrowing down the search to identifiable individuals among the relatives of those meeting you. Or among those who purchased return tickets while still at home, or simply from the list of passengers who checked in for the flight. An unidentified “stranger” stepping off the plane instantly acquires a “dossier” on themselves. No one will specifically take fingerprints or a voice sample. They will eventually end up in the system one way or another.

Unlike the existing identification system based on passports or other documents, such a multifactor “recognition” system is extremely difficult to deceive. It can flexibly adjust the level of strictness depending on the importance of the action being performed by the individual. For example, a subway turnstile might be equipped with the most basic contactless card reading system, without any cameras or retina scanners—if that card is stolen, the loss is minimal. However, at the moment of applying for a loan, one can bring out the heavy artillery, including DNA analysis and questioning close relatives about your authenticity. And even if you have an indistinguishable fake passport in someone else’s name, it will be of no help to you at all.

What proves that I am who I truly am is not my passport, but the people who have surrounded me since my birth. No matter how beautiful a spy’s “legend” may be, they can always be exposed simply by showing them to their classmates with whom they supposedly studied. In such a system of tracking the continuity of history, any additional identifiers would only be needed just in case. By the way, this method of identifying everything and everyone makes solving the problem of editing history, at the very least, quite complicated. Those who control the present cannot control the past. Any attempt to rewrite history will inevitably disrupt continuity, whether of the object itself or of the objects surrounding it. A social media user profile with a sufficiently long history of activity, complete with photos, events, friends, messages, and comments, provides more confidence in who you are dealing with than a passport does.


One of the authors of this book had a rather telling experience. While awkwardly backing out of a parking space, he bumped into the car behind him. It was a weekend, and no one wanted to call the police and waste half the day dealing with paperwork. Moreover, the situation could have been turned upside down with a “proper” arrangement with the police, and if the author had been a shameless liar, he could have claimed that the other car had hit him from behind instead of him hitting the car behind him.

At the same time, the cost of repairing the victim’s car was unclear, and no one had that kind of money on them. The author, noticing that the injured driver was a young man who was likely familiar with the internet, suggested that he write down all his social media profiles, check their existence using his smartphone, and then part ways. If the author were to deceive the victim, all his friends and acquaintances would find out. Reputation is worth more than a couple of thousand hryvnias.

That’s exactly what they did. A week later, the victim sent the author a scanned copy of the invoice from the service center, detailing the cost of the repairs, and the author transferred the money to the victim’s card without meeting him again. In fact, this was a precedent for a completely new kind of relationship, where the role of the state as a provider of “violence for the sake of justice” was entirely unnecessary, and social media profiles proved to be more powerful than police protocols.


Absolutely transparent from the inside, the system is very well protected from the outside—it’s practically impossible to infiltrate it unnoticed or by impersonating someone else. You can’t fool an identification system based on historical information. It’s like trying to show your mom another person and claiming that he is her child. She knows that’s not true because she has been watching you continuously since birth. A passport or a birth certificate, or some chip, won’t convince her. Tomorrow, even credit cards will be unnecessary. You walk into a store, take what you want, and leave. The bill will come automatically.

Moreover, the standard identification document implies a state monopoly on producing such documents. And where there is a monopoly, there is also corruption, abuse, and inefficiency. The TIS completely eliminates the need not only for passports but also for any certificates and paperwork, depriving officials of one of their main levers of influence.

All people are “Big Brothers.”

As you envision the computerized future described in this chapter, you might recall the dystopias and inhumane horrors depicted by authors in the cyberpunk genre. However, upon closer inspection, it is the current system—rooted in the alienation of individuals from society—that proves to be truly inhumane. This is a system where the media is imposed on the mind, creating an illusion of community.

Migration to megacities has led to a situation where we know nothing about our neighbors and often don’t even greet them. There’s little point in building relationships or keeping track of our neighbors’ reputations—they’ll likely move away in a few years. We don’t make an effort to get to know our colleagues either, as we don’t live with them for a lifetime. Around us, there are not people with stories and reputations, but rather just “a passport number, who issued it, and when.”

Reconism, on the contrary, makes society more humane by returning it to the state most comfortable and safe for existence — a state of community, where everyone knows everything about each other. Physically, a person cannot keep track of a large number of social connections [96]. Here the TIS comes to their aid, bringing comfort and safety precisely through the “humanization of the faces in the elevator.” This is simply another of those moments when humanity, at a new stage of its development, needs yet another crutch, just as books were once needed to aid the memorization and accumulation of knowledge.

Yes, the situation where everyone’s lives are open to each other seems fantastic today. At the same time, this fantasy does not stem from its impracticality or from the objections of individuals (almost everyone is willing to reveal their bank accounts to see the accounts of the prime minister or oligarchs), but rather from “public opinion,” which holds that violating privacy is wrong. Everyone is personally “in favor,” but believes that “no one will actually go for it.” So if this is feasible and acceptable to almost everyone, why not start moving in that direction?

The idea that people won’t disclose information about themselves (that they won’t want, figuratively speaking, to live in houses with transparent walls) is somewhat naive. People have already done this. Moreover, it’s not mutual, as it should be, but one-sided. They have long opened their accounts and all their financial transactions, but not to each other, rather to the authorities. It’s not difficult to track every penny of a respectable household’s budget. Yet it seems unfair that your accounts can be monitored, but you can’t track the accounts of government officials or criminals. Which wall is more honest? Transparent or one-way mirror?

The trends towards increasing control over the circulation of goods and money are evident. At the same time, both the laws supporting privacy and public opinion on the necessity of such laws will become more pronounced. In other words, there will be a growing legal privilege for the ruling class to intrude into the affairs of others, while ordinary citizens will be deprived of this opportunity. A moral framework with double standards is being established.

The signs of such morality are everywhere. In tinted windows, five-meter fences, offshore accounts, and so on. Here it is — the real instrument of power. This instrument is the monopolization of the right to information, and the way to fight against criminal authority is not to stop paying taxes or to withdraw money from banks, but to allow other people to know as much about you as the authorities do, and to demand that the authorities disclose the same.

It may seem that mutual transparency completely destroys privacy. However, it is precisely mutual transparency that allows people to identify violators of personal privacy and hold them accountable. Thus, mutual transparency ensures true privacy, as opposed to the illusion of taboo that exists today. It is also important to distinguish between privacy and invisibility. We are visible to everyone in a crowd, but that does not violate our privacy. On the contrary, we attract much more attention when we walk down the street wearing a Guy Fawkes or Batman mask. Generally, no one thinks about protecting their privacy unless there is a specific information collector, a “Big Brother,” that they can point to [48].

A good illustration of how privacy is implemented in a transparent society could be a nudist beach or a restaurant [50]. It seems that everyone is open with each other, yet it’s not customary to stare at other people, and the actions of an observer won’t go unnoticed. An example of how security is implemented through openness can be seen in the practice of not locking doors in small peaceful towns. No one would want to be found in their neighbor’s house without permission, even though anyone could technically walk in.

If a person is able to know who is watching them and when, then they will also be able to put a stop to the observation and hold the observer accountable for their actions. The issue isn’t about the ability to spy on neighbors’ intimate moments, but rather about neighbors knowing that you are doing it right now or have done it in the past. It’s not about everyone being able to eavesdrop on someone else’s phone conversation or peek at their messages. The question is whether everyone will be able to know who is eavesdropping or spying, and also, without asking, determine the motives behind those actions and bring attention to the unethical behavior of that person.

Only by knowing everything about those around us and about the authorities can we be sure that our rights are not being violated. The ruling elite imposes a completely different concept of privacy, suggesting that everyone should walk through dark alleys with their eyes closed. The authorities claim that we can rely on law enforcement to ensure our safety, to show us who, how, and where to go, and when to duck to avoid a blow to the head, promising that criminals will also close their eyes.

But criminals are criminals for a reason—they peek through the cracks in the system whenever they get the chance. At the same time, they actively exploit the forced blindness of honest people to hide their crimes. In a transparent society, even if they manage to commit a crime, they won’t be able to enjoy the fruits of their labor. If every financial and commodity transaction is recorded, how can they launder their illicit gains?

Any society primarily revolves around the needs of households that earn and spend money. If you want to benefit from these households in some way, you need to figure out how to do so without using money. It’s hard to imagine viable illegal schemes under such conditions. The drug mafia, in any case, sells drugs for money, which comes into the mafia when a drug addict buys another dose from a dealer. Money is the weak link in the drug trade. If they could operate without money and instead take, say, donated blood for drugs, they would have already done so.

The turnover of unrecognized (unregistered, with no history) goods is problematic because anyone can take an unrecognized item and claim it as their own. This means that the owner of unregistered goods puts their very ownership rights at risk.

A “shadow” economy under reconism might be hypothetically possible, but it would mean a standard of living comparable to the Middle Ages and subsistence farming. It would be extremely difficult to maintain any kind of reputation while driving around in a Maybach bought with criminal proceeds when anyone can use public information systems to see that your “white” income is barely enough for a Lada.

Any “outsider,” if they are truly outside the system, must be 100% outside the system—meaning they shouldn’t go to the store or even acquire anything from others who do shop (since those who go to the store need money that circulates within the system, not something else). Any “parallel economy” is not just about “shadow transactions,” but also about a parallel system of producing goods, as complete shadow means a total refusal to engage in transactions with the system. If someone is outside the system, then they do not exist for it. The system is indifferent.

Active social networks

Tracking and accounting are just one side of the coin. In addition to passive information gathering, an active component is necessary—a platform for collaborative decision-making and coordination of actions. The series of Arab “Twitter revolutions” was the first wake-up call. So far, social networks have demonstrated their effectiveness for short-term coordination of large groups of people, but they are not very good for long-term planning.

Current social networks are not tied to real organizations or communities in any meaningful way. While social media platforms do support the concept of a “group” or “community,” there is neither the desire, nor the motivation, nor the tools for the members of such a “group” to be actual members of a real community. Most of a user’s “friends” are not real friends, and “friendship” on social media typically does not lead to genuine friendships among people. Essentially, individuals lack shared real interests. Instances where virtual communication leads to real-life interaction only occur when participants in the social network become involved in some tangible transaction. Someone asks someone else to pass something along, someone shares something with someone, someone buys or sells something from someone, or someone organizes a common project or acquires a shared resource for the entire group.

Even thematic communities, created from users of certain real objects, are fundamentally detached from the real world, and their actual gatherings “offline” are precisely that attempt to escape the virtuality of the community of which they are members. The opposite is also true—online services aimed specifically at making real transactions rarely lead people to form genuine close relationships.

A watershed moment is emerging: websites are dividing into those meant for transactions and those for communication. However, there are no real forces keeping people engaged online. No one is dependent on anyone else, and “likes” in a collective blog do not translate into real value. Sooner or later, society will become saturated with social networks, as they are not truly social in the literal sense. The social aspect within them is virtual and fabricated. There is no trust in virtual “friends,” there is nothing to share with them, and starting or ending a friendship is very easy. Information and entertainment can be passively consumed without engaging with others, and importantly, without encouraging other members of the virtual community to interact. This overall passivity will initially reduce the number of active contributors, and then existing authors, losing audience support, will also stop writing. This has happened before in human history, such as with the rise and fall of the amateur radio movement, which is now a rather rare phenomenon. Social networks may lose popularity and, in any case, must evolve.

It is reasonable to conclude that the only path social networks can take is towards their “devirtualization”: the participants of such networks would be members of real communities, and the networks themselves would represent a complex interweaving of actual groups, united by people’s simultaneous membership in several of them. On the other hand, real communities have no motivation of their own to move into the virtual realm. They have very concrete, earthly needs, and a real community gains no “added value” simply from being present online: its members already enjoy higher-quality communication face to face, and there is little point in discussing anything else online.

If we understand where real communities come from and what exactly brings people together, we can speculate on what kind of “added value” should be offered to these real communities to encourage them to go online. Perhaps real communities are united by a certain idea? It seems that this is not the case. Employees of companies are often not united by a common idea, and even less so are the residents of a single building.

Common goals? Perhaps. There is a shared goal among passengers on an intercity bus—to get from point A to point B—but the only thing that unites them is the bus itself. It seems that the key lies in the “common bus.” Any real social structure is built around a common resource that these people use or create. Even ideologically driven organizations become organizations when they start collecting membership fees and deciding how to spend them. Until then, people do not have any common interests, only a shared opinion.

It turns out that a key drawback of existing social networks is the lack of a unifying common resource, which leads to a disconnection between people and the virtual group of which they are members. Therefore, an effective method for “virtualizing” real communities would be to offer a tool that facilitates the shared use and management of a real resource.

One can imagine an internet service designed to bring people together around a common resource and to develop solutions related to the management of that resource. The service would act as a marketplace for resource providers, contractors, and organizers. It should be reputation-driven, where users rely on both numerical reputation (karma, badges) and natural reputation, which can be tracked through user activity, their posts, comments, initiatives, and reviews.

Users can be members of different resource-dependent groups, act as resource providers, and serve as resource administrators. The upward flow of information is supported by the natural structure of the service, designed with the experience of social networks in mind. The mobilization of groups and the elimination of the free-rider effect is achieved by introducing a new incentive—digital reputation. The stimulated group transforms from latent to mobilized, enabling it to quickly make optimal decisions.

The alienation of the administration from the resource owners is eliminated through the “immediacy” of their powers, the transparency of their activities, and the openness of discussions about their work. People follow the opinions of various experts on the issues at hand, using social media mechanisms (likes, karma, reputation). Each person chooses a leader in such a way that they can change their preferences at any moment, which deprives the leader of “excess weight” in the discussion of decisions and prevents the alienation of the leader from their followers.

User motivation to participate in the service comes from direct economic benefits and the convenience of making important collective property decisions.

Users are also motivated by having complete control over the outcome of their joint efforts. They all have the opportunity to take part in working out the solution, which lets them assess the results of the idea’s implementation and, accordingly, evaluate the work of the leaders and administrators, whose reputation rises after a positive outcome and falls after a negative one.

The decisions themselves and their implementation, including the accounting, are completely transparent and accessible to those who are interested.

How are collective decisions made now? For example, the residents of a building decided to install a barrier in the yard. Currently, it looks like one of the active residents goes door-to-door, proposing a meeting in the evening to discuss the details. At the meeting, half or even fewer of the invited residents show up. The activist presents the idea and suggests one or more possible solutions. Often, the solution they advocate is subjective and contains elements of corruption. In any case, someone will say, “I agree, but that’s too expensive. I’ll check how much it costs myself.” The decision gets delayed for another couple of months. Sooner or later, people manage to gather half of the required amount, trusting the activist. A wealthier individual usually covers the remaining costs, hoping to collect debts later. The main thing is that the barrier works. Eventually, the barrier is installed. Some people shirk their payment, but this is soon forgotten. Someone says, “I don’t have a car, so I don’t need it.” Some will pay more, while others will pay less. A lot of time is wasted from the idea to the decision, the decision itself is not flawless, and there is still no genuine participation from everyone.

How could it work instead? Imagine a service featuring a community of residents of a building who are already united by a common resource (the building itself) and by shared expenses for its maintenance (utilities). On a common “board,” one of the residents writes that it would be a good idea to install a barrier gate. His message gets “liked” by several other residents. A discussion of ideas begins. Eventually, a couple of people find suppliers of barrier gates on the same service, and everyone can see their reputation, prices, reviews, and examples of their work. Using the “like” and “dislike” buttons, people decide which barrier gate to order. The most proactive person, based on the discussion, opens a new group for the new resource, the barrier gate, and invites others to join. The cost of the barrier gate is visible to everyone, and it is understood that as soon as the necessary funds are collected, the service will automatically pay for the resource.

Once the resource is acquired, it is made available to all residents of the building. Everyone pays a “rental fee” for the use and upkeep of the new resource, while the “shareholders,” that is, those who did not hesitate to pay for the barrier upfront, receive compensation from the residents who did not, spread out over time as a kind of investment return. If everyone has paid upfront, the “rental fee” and the “investment return” simply cancel each other out. If someone who rarely visits the building has not paid for the barrier, they pay “rent” based on how many times they pass under it, or on a monthly basis.
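To make the arithmetic concrete, here is a minimal sketch in Python with invented numbers and function names (an illustration, not part of any described service): the amortized cost of the purchase is covered by a per-use “rental fee,” while the upfront payers are repaid in proportion to their contributions.

```python
# Hypothetical sketch: settling a shared purchase (e.g. a barrier gate)
# between residents who paid upfront ("shareholders") and those who pay
# a periodic "rental fee" instead. Numbers and names are illustrative.

def settle_period(cost, contributions, users, periods=24):
    """Return per-period payments and payouts for one settlement period.

    cost          -- total price of the shared resource
    contributions -- {resident: amount paid upfront}
    users         -- set of residents who used the resource this period
    periods       -- number of periods over which the cost is amortized
    """
    per_period = cost / periods                 # amortized cost per period
    fee = per_period / len(users)               # "rental fee" per active user
    payments = {r: fee for r in users}          # everyone who uses it pays rent
    # Upfront payers are repaid in proportion to their contribution.
    total_contrib = sum(contributions.values())
    payouts = {r: per_period * (c / total_contrib)
               for r, c in contributions.items()}
    return payments, payouts

# Example: a 24 000 unit barrier, two residents paid upfront, four use it.
payments, payouts = settle_period(
    cost=24_000,
    contributions={"Anna": 16_000, "Boris": 8_000},
    users={"Anna", "Boris", "Clara", "Dmitry"},
)
print(payments)  # each active user owes 250 this period
print(payouts)   # Anna receives 666.67, Boris 333.33
```

If all four residents had contributed equal shares upfront and used the barrier equally, each person’s fee and payout would cancel out, exactly as described above.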

The results of such a project will live on in the discussions, shaping users’ reputations, the weight of their voices, their authority and credibility. Reviews of the supplier’s work will be available to all users of the service, not just to members of one particular community.

Another example: bus routes. How does it work now? Buses run on a fixed schedule. Trips during off-peak hours are unprofitable for the carrier, while trips during peak hours are inconvenient for passengers. Demand and supply cannot be predicted perfectly, and the vehicles cannot be adjusted to match them. Nor is it possible to pay exactly what a trip is worth, or to request a route that looks attractive to a group of passengers but whose profitability is unclear to the carriers.

How could it work instead? Imagine a service where transportation providers publish the schedules they would prefer to run their vehicles on. People sign up for specific trips, forming groups around the shared resource. The cost per passenger rises if the route is “exotic” and attracts only a taxi-sized group, and falls if the route is in high demand. Carriers bear no demand risk and therefore no longer need to inflate peak-time prices to compensate for off-peak losses. At the same time, to maintain their reputation, carriers must operate with complete transparency, and that reputation is built from passenger reviews.

Passengers themselves, at the initiative of one of the service’s users, can propose a new trip or an entire schedule, and carriers can bid for it. A preliminary payment, or an agreement to have the funds deducted automatically once the request is submitted, serves as a guarantee for the carrier. Passengers may be offered options like “I’m willing to pay less, but leave later/earlier/within an hour.” They can promote their trip through social media to attract more people and thereby reduce their own cost. Passengers can even buy out a trip from the carrier, effectively investing in that trip or a whole schedule, and earn investment income from selling seats on the bus to other passengers.

In the end, a group of people organized around the resource “Route A-B and back” can, with the help of an enterprising administrator, find a driver and a bus, pay for regular trips along the route, and earn money by selling this resource to other users and to themselves. If everyone uses the route equally, they all pay the same amount; if someone pays a “founding” fee but uses the bus more or less than the others, they end up paying correspondingly more or less than the rest.
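As a rough illustration (a sketch with invented numbers, not a specification of the service), the fare split for such a chartered route could look like this: the fixed price paid to the carrier is divided among the founders in proportion to how much each of them actually rode, with seats sold to outsiders reducing the founders’ bill.

```python
# Hypothetical sketch: splitting the cost of a chartered route
# "A-B and back" among its founders in proportion to actual usage,
# with seats sold to outside passengers reducing the founders' bill.

def split_route_cost(carrier_price, founder_trips, outside_revenue=0.0):
    """carrier_price   -- what the group pays the driver for the period
    founder_trips   -- {founder: number of trips taken this period}
    outside_revenue -- income from selling seats to non-founders
    """
    to_cover = carrier_price - outside_revenue   # founders cover the remainder
    total_trips = sum(founder_trips.values())
    return {name: to_cover * trips / total_trips
            for name, trips in founder_trips.items()}

bill = split_route_cost(
    carrier_price=30_000,
    founder_trips={"Anna": 20, "Boris": 20, "Clara": 10},
    outside_revenue=5_000,
)
print(bill)  # Anna and Boris pay 10 000 each, Clara pays 5 000
```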

From the end user’s perspective, the service resembles a hybrid of a social network, an online store, and a payment system that implements subscription payments. However, it is not just a social network; it reflects the real relationships between people. Here are the key differences of this network from a regular one:

  • Groups are formed based on a common resource rather than shared interests.
  • All participants are identified by their real names and credentials, with no virtual identities, avatars, or nicknames allowed.
  • The reputation of participants is monitored on an ongoing basis and is not limited to the confines of a single group. For example, information about a supplier fulfilling a contract with quality and on time is always available to everyone, regardless of which groups they belong to.

Such a system could serve as a platform that replaces the registration process, as well as all statutory activities (meetings, supervisory boards, audit commissions) for partnerships or joint-stock companies. After all, statutory documents are essentially registered with government authorities only to ensure their immutability, and all company documents are merely reflections of entries in existing registries. In Singapore, for example, all such procedures are already conducted online, and the concept of a “stock certificate” does not exist: anyone can visit the website to view the composition of a joint-stock company, the number of shares, the charter, its activities, and its financial reports [97]. Everything is transparent and free of bureaucracy. A joint-stock company is nothing more than a group of people (or other entities) gathered together to jointly use a resource. Yet shareholders, needing administration, have always had to accept losses to corruption, because until now they have had no other option.

The habit of taking expenses upon oneself, growing out of a gradual loss of trust in the existing bureaucracy, will mean that more and more groups find it easier to pool their money and build something themselves rather than wait for officials to build a road, a power plant, a ship, or a railway. As mass accounting systems and active social networks develop, larger and larger projects will become feasible. If the residents of a city suddenly want to build a bridge, they will build it. After all, a bridge is an investment: by charging for its use (and with the advancement of accounting technologies, tracking who used the bridge and how often will not be a problem), they can secure themselves a comfortable retirement. Ultimately the state, as a structure that exists to administer the distribution of taxes, will become almost unnecessary. People will figure out for themselves where, how, and on what to spend their money.

The network of the future

The term “Internet” is most commonly understood to mean the World Wide Web. But they are not the same thing. The Internet came into being on October 29, 1969, while the World Wide Web became publicly accessible only in 1991. The web is just one of many services running over the Internet, alongside email, instant messaging, voice and video communication, and file sharing. Yet the World Wide Web has become synonymous with the Internet, and its underlying protocol, HTTP, is currently the most widely used. Strictly speaking, it is now used for purposes it was never really designed for. HTTP stands for “Hypertext Transfer Protocol.” Its creator, Tim Berners-Lee, conceived the web as a network of interconnected documents filled with hyperlinks, something akin to a global library. The Internet has long since outgrown that stage. The network of today and tomorrow is not a network of documents or a library; it is an incredibly complex model of the real world, a full reflection of it, in which documents play only a minor role. It is a network of people, things, money, ideas, places, corporations, states, their relationships, and their combinations.

Modern web applications have little in common with the websites of fifteen years ago. The concept of a globally linked library has long since become outdated. Facebook is not a library, nor are Twitter, Google, or Amazon. Many services built on the HTTP protocol today have nothing to do with hypertext. A new set of foundational constructs is needed, one much broader and more universal than documents and hyperlinks.

Social features, reputation systems, electronic payments and trading systems, cryptography, file and message sharing, cloud storage and synchronization, voice and video communication, interest-based groups and circles, search, filtering, and recommendations are used to varying degrees on almost every sufficiently developed website. Some of these functions, such as cryptography, have already become part of the basic protocols, while most are either re-implemented by each site’s programmers or taken ready-made from major providers: search from Google, social buttons from Facebook, payments from PayPal. Almost all of these functions remain highly centralized and dependent on the will of small groups of people. The risks of this were clearly illustrated by the persecution of WikiLeaks. During the publication of secret cables from U.S. diplomats, the website wikileaks.org was subjected to a powerful DDoS attack. Under pressure from U.S. authorities, its DNS provider, EveryDNS, stopped resolving the wikileaks.org domain, Amazon.com refused to provide hosting services, and Bank of America, along with the payment systems PayPal and Moneybookers, froze WikiLeaks’ accounts. Visa and Mastercard blocked donation transfers. The site continued to operate only because volunteers created and maintained more than a thousand copies (“mirrors”) around the world. Had the U.S. government also managed to censor search engine results, access to the site could have been cut off almost completely.

For the network of the future, decentralization of these functions is vital. Only then will the network not be divided by corporate and state boundaries into easily manageable and vulnerable segments. Only then can information technology become a solid foundation for a new social order.

The fact that the architecture of the network should be based on distributed technologies does not exclude the existence of large and very large websites or data centers, but it reduces dependence on them, which can ultimately serve as a guarantee of their inviolability. Authorities will understand that if they shut down Google, Facebook, or Twitter today, a distributed structure will take their place tomorrow (literally tomorrow!). It may be less efficient, but it will be beyond any possibility of control or negotiation with management, due to the absence of such management. The mere existence of distributed services will compel authorities to behave loyally towards large internet corporations. Currently, the largest nodes on the internet resemble skyscrapers in the middle of a desert. In the future, they will not disappear, but they will be surrounded by “low-rise development” that supports, backs up, and duplicates them. Just as in file-sharing networks or the Skype network, most of the resources are provided by the computers of ordinary users, but at the same time, no one prohibits using the services of cloud providers or data centers for an additional fee.

Instead of documents, a key object in such a distributed network can be a more abstract unit, which we will simply call a “resource.” A resource can be anything — a document, a blog post, a file, or even an object from the real world. Metadata is used to manage and describe these resources. A file containing metadata, or a metafile, should play a role that is currently divided among domain name systems, search engines, torrent trackers, Wikipedia, and reputation and rating systems. This is a “label” that contains all the essential reference information about the resource. Similar “labels,” or if you prefer, “passports,” can be provided for user accounts or other active participants in the network, which we will collectively refer to as “agents.”
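As a purely illustrative sketch (the field names and structure are assumptions, not a proposed standard), such a metafile might look like a small signed record:

```python
# Hypothetical sketch of a "metafile": a signed label describing a resource
# (or an agent) in the distributed network. Field names are illustrative.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Metafile:
    resource_id: str                 # content hash or other stable identifier
    kind: str                        # "document", "file", "real-world object", "agent", ...
    title: str
    owner_id: str                    # agent responsible for the resource
    keywords: List[str] = field(default_factory=list)
    ratings: Dict[str, float] = field(default_factory=dict)   # keyword -> average score
    locations: List[str] = field(default_factory=list)        # nodes holding replicas
    access_log: List[str] = field(default_factory=list)       # agents that requested it
    signature: str = ""              # owner's cryptographic signature over the fields

# Example: a "passport" for a real-world object managed by a group of residents.
barrier = Metafile(
    resource_id="object:building-12:barrier-gate",
    kind="real-world object",
    title="Barrier gate, building 12",
    owner_id="agent:building-12-residents",
    keywords=["parking", "shared property"],
)
```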

Metafiles and resources that belong to or are related to a specific user can be stored both in data centers that provide commercial services and in a cloud made up of the user’s own computers and those of their friends, colleagues, or relatives. Such a scheme is already being used by file-sharing and distributed social networks, such as Diaspora (http://diasporaproject.org/).

Each request to a resource is recorded in its metafile, and the resource itself, along with a copy of the metafile, is stored on the computer of the user who accessed the resource for a certain period of time. Thus, any requested information is duplicated multiple times.

Sensors, scanners, video cameras, and microphones feed data about people and objects in the physical world into the network, where their history is tracked just like that of resources inside the network. Multi-camera security systems that follow an object as it moves between cameras already exist; nodes equipped with cameras and sensors will be able to do the same on the scale of an entire city or country. And since the network is decentralized, such information cannot be monopolized by anyone.

Information about the resources that the agent is interested in is accumulated and serves to facilitate and speed up the search for similar information in the background. Moreover, this agent begins to assist others in finding information that aligns with its interests.

Let’s assume that the user enters a query in the search bar. The search for resources that match the query occurs in two stages. In the first stage, nodes that are semantically closest to the user’s query are identified. In the second stage, these nodes return a list of resources that are most relevant to the query.

In other words, if someone wants to find information about fishing, they first look for agents who are dedicated anglers, and then these agents provide them with all the necessary information. Their authority and reputation, built up over their entire previous online presence, serve as a guarantee of the relevance of the information provided. Search spam in such a system is virtually impossible.
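A minimal sketch of this two-stage lookup (the scoring is deliberately naive, and all names and weights are invented for illustration): first rank agents by how close their interest profile is to the query, then let the selected agents return their best-matching resources, weighted by reputation.

```python
# Hypothetical sketch of the two-stage search: query -> relevant agents ->
# resources recommended by those agents. Scoring is deliberately naive.

def stage_one(query_terms, agents, top_n=3):
    """Rank agents by overlap between the query and their interest profile."""
    scored = [(len(set(query_terms) & set(a["interests"])), a) for a in agents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [a for score, a in scored[:top_n] if score > 0]

def stage_two(query_terms, agents):
    """Ask the selected agents for resources matching the query."""
    results = []
    for agent in agents:
        for res in agent["resources"]:
            if set(query_terms) & set(res["keywords"]):
                # weight the hit by the agent's reputation on the topic
                results.append((agent["reputation"], res["title"]))
    return sorted(results, reverse=True)

agents = [
    {"interests": ["fishing", "boats"], "reputation": 0.9,
     "resources": [{"title": "Night fishing on the Volga", "keywords": ["fishing"]}]},
    {"interests": ["cooking"], "reputation": 0.7,
     "resources": [{"title": "Fish soup recipes", "keywords": ["cooking", "fish"]}]},
]
selected = stage_one(["fishing"], agents)
print(stage_two(["fishing"], selected))   # resources from the angler, ranked by reputation
```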

It is not always possible to obtain a comprehensive and consistent answer. Generally, there are several different, often conflicting opinions on any given issue. It would be wrong to try to compute the single best answer with some algorithm, because that distorts the real picture: the fact that there is no single opinion is itself important for decision-making, and users should not be deprived of it. To identify groups with differing opinions, cluster analysis is needed [98], allowing people with opposing views to coexist constructively rather than conflict.

The advantages of clustering groups with similar interests but opposing opinions also include protection against spam and manipulation. Any entity attempting to create an army of virtual accounts is gently and unobtrusively isolated from the rest of the world within its cluster, where it and its bots can endlessly upvote each other and flood the space with posts containing promotional links. The same applies to hyperactive online crazies.
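A toy sketch of this mechanism (the similarity measure and the threshold are arbitrary assumptions): agents whose voting histories largely coincide fall into the same cluster, so a ring of mutually upvoting bots simply ends up isolated in a cluster of its own.

```python
# Hypothetical sketch: clustering agents by how similarly they vote,
# so that mutually upvoting bot rings end up isolated in their own cluster.

def vote_similarity(a, b):
    """Fraction of commonly rated items on which two agents agree."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    return sum(a[i] == b[i] for i in common) / len(common)

def cluster_by_votes(votes, threshold=0.8):
    """Greedy single-pass clustering; votes = {agent: {item: +1/-1}}."""
    clusters = []
    for agent, ballot in votes.items():
        for cluster in clusters:
            rep = votes[cluster[0]]              # compare with the cluster's first member
            if vote_similarity(ballot, rep) >= threshold:
                cluster.append(agent)
                break
        else:
            clusters.append([agent])
    return clusters

votes = {
    "alice": {"post1": 1, "post2": -1, "post3": 1},
    "bob":   {"post1": 1, "post2": -1, "post3": 1},
    "bot_1": {"spam1": 1, "spam2": 1},
    "bot_2": {"spam1": 1, "spam2": 1},
}
print(cluster_by_votes(votes))   # [['alice', 'bob'], ['bot_1', 'bot_2']]
```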

The information contained in metadata may include user ratings (“+1”). These can be accompanied by clarifying labels or keywords. This way, a reputation for agents and resource ratings are formed that are much more sophisticated than today’s “one-dimensional” digital karma or ratings.

Firstly, having a separate score for each keyword will help avoid situations where a low rating in one area overshadows a high rating in another. Secondly, it becomes possible to track who is giving positive or negative ratings, thereby eliminating the chance of random fluctuations caused by a “mob of hamsters” who are not knowledgeable about the issue.

Moreover, reputation can be assessed individually for each query. For example, someone is choosing a doctor for treatment. The system can establish a trust chain between the patient and the doctor by finding several people with medical backgrounds among the patient’s friends, and among “friends of friends,” someone who has previously been treated by that specific doctor. Their ratings and reviews will carry much more weight than those from random individuals.
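The doctor example could be sketched roughly as follows (a deliberate simplification with invented names; a real trust metric would be far more involved): find how far each reviewer is from the patient in the friendship graph and weight their reviews accordingly.

```python
# Hypothetical sketch: weighting reviews of a doctor by the reviewer's
# distance from the patient in the friendship graph (breadth-first search).
from collections import deque

def distances_from(start, friends):
    """Breadth-first search; friends = {person: [person, ...]}."""
    dist, queue = {start: 0}, deque([start])
    while queue:
        person = queue.popleft()
        for other in friends.get(person, []):
            if other not in dist:
                dist[other] = dist[person] + 1
                queue.append(other)
    return dist

def weighted_rating(patient, reviews, friends):
    """reviews = {reviewer: score}; closer reviewers count for more."""
    dist = distances_from(patient, friends)
    weighted, total = 0.0, 0.0
    for reviewer, score in reviews.items():
        weight = 1.0 / (1 + dist.get(reviewer, 10))   # strangers get very little weight
        weighted += weight * score
        total += weight
    return weighted / total if total else None

friends = {"me": ["kolya"], "kolya": ["me", "vera"], "vera": ["kolya"]}
reviews = {"vera": 5, "random_user": 1}               # vera was treated by this doctor
print(round(weighted_rating("me", reviews, friends), 2))   # ~4.14, dominated by vera
```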

Moreover, the algorithms used to calculate ratings and reputation can be selected and compared with each other, and even mixed in certain proportions. Doubt it? Switch to a different recommendation algorithm and see for yourself. Don’t trust algorithms at all? Use a rating compiled manually by an expert or a professional community. It’s similar to boxing, where, despite having the same basic rules and techniques, there are many versions and formats of competitions — all those WBA, WBO, and WBC titles. Just like in boxing, where unification bouts can take place between champions of different versions, you can choose which algorithm to use for processing publicly available basic metadata. This way, it will be clear who is who. There will be healthy competition among the algorithms.

Currently, searching for relevant information online is done either by domain name, through Google, through structured directories like Wikipedia, or passively—via Facebook walls or tweets. The system of metafiles that creates a “semantic map” of the web combines the strengths of all these approaches [99].

Since the search occurs in two stages (first agents, then resources), the user not only receives information but also new connections and contacts. By actively engaging with a particular topic, a person automatically becomes part of the community interested in that topic, without the need to register on specialized forums, join groups, etc.

The network should keep track of all its physical resources (megabytes, megabits, etc.) and their consumption by participants. The overall balance of the network over any extended period of time must always be reconciled. Just like with megabytes and megabits, any other tangible assets can be integrated—dollars, euros, goods, services. Exchanges and payment systems are needed for resource trading. An integrated system of micropayments, microloans, and resource exchanges within the network’s basic structures will virtually eliminate the inconveniences associated with paying for any content and significantly reduce its cost, thereby addressing the issue of piracy. For the vast majority of people, the convenience of using a legal and comprehensive digital content database will outweigh the desire to save a few cents by downloading for free and damaging their reputation. A necessary condition for the operation of such a scheme is the guarantee that the price of any content must remain insignificant for the user, allowing them to download without fear of receiving a hefty bill at the end of the month. It’s similar to how we currently pay for electricity. The price per kilowatt is practically constant.
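A minimal sketch of such accounting (entirely illustrative; a real system would need signatures, consensus, and fraud protection): every transfer of megabytes, money, or other assets becomes an entry in a ledger whose balances always reconcile to zero.

```python
# Hypothetical sketch: a double-entry ledger for network resources
# (megabytes, currency units, etc.). Every debit has a matching credit,
# so the total balance of the network is always zero.
from collections import defaultdict

class Ledger:
    def __init__(self):
        self.balances = defaultdict(lambda: defaultdict(float))

    def transfer(self, sender, receiver, asset, amount):
        self.balances[sender][asset] -= amount
        self.balances[receiver][asset] += amount

    def reconciled(self, asset):
        """The sum over all accounts must be (numerically) zero."""
        return abs(sum(acc[asset] for acc in self.balances.values())) < 1e-9

ledger = Ledger()
ledger.transfer("alice", "node_42", "MB", 350)     # Alice downloads 350 MB via node 42
ledger.transfer("alice", "author_7", "EUR", 0.02)  # and pays two cents for the content
print(ledger.reconciled("MB"), ledger.reconciled("EUR"))   # True True
```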

On the other hand, unlike standard kilowatts, the quality of what we download from the network can vary significantly. To account for this, a mechanism for pricing content based on its ratings and reviews could be implemented. This mechanism can be referred to as crowd pricing.
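As a toy illustration of crowd pricing (the coefficients are arbitrary assumptions, not a proposal): the price of an item floats between a floor and a ceiling, tracking its average rating more closely as the number of reviews grows.

```python
# Hypothetical sketch of "crowd pricing": the price of a piece of content
# drifts between a floor and a ceiling depending on its ratings.

def crowd_price(avg_rating, n_reviews, floor=0.01, ceiling=0.50, confidence=50):
    """avg_rating in [0, 1]; more reviews -> price tracks the rating more closely."""
    trust = n_reviews / (n_reviews + confidence)    # 0 with no reviews, approaches 1 with many
    effective = 0.5 + trust * (avg_rating - 0.5)    # pull towards the rating
    return floor + (ceiling - floor) * effective

print(round(crowd_price(0.9, 400), 3))   # well-reviewed item drifts up: 0.429
print(round(crowd_price(0.2, 400), 3))   # poorly rated one drifts down: 0.124
print(round(crowd_price(0.9, 0), 3))     # unrated item starts near the middle: 0.255
```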

In the coming years, it will become possible to create a distributed information system on a global scale that will contain a sufficiently accurate and detailed model of the real world. This system will enable universal accounting and tracking of people, objects, and goods both within the network and outside of it, the calculation of multidimensional (similar to character parameters in a role-playing game) and context-dependent reputations for individuals and organizations, payment for any goods and services, funding for any projects, and real-time discussion and collective resolution of any issues without long-term delegation of authority.

Such a system will enable a leap in human development comparable in significance to the Neolithic Revolution. It will effectively become the nervous system of our planet, uniting all of humanity into a structure that is stronger than any state of the past or present, while also being incomparably more flexible and free.

Just as machines and mechanisms have multiplied human physical strength by thousands and millions of times, writing has expanded memory capacity, and computers have increased calculation speed, such a system could vastly increase Dunbar’s number, turning the entire planet into a “global village” where everyone knows each other and trusts one another.


Dunbar’s number is a biological limit on the number of stable social relationships that a person can maintain. Maintaining such relationships requires knowledge of an individual’s distinctive traits, character, and social status, which demands significant intellectual capacity. It ranges from 100 to 230, with 150 being the most commonly accepted figure. The term is named after the English anthropologist Robin Dunbar, who proposed this number [96].

Source: Dunbar, R. I. M. (1992). “Neocortex size as a constraint on group size in primates.” Journal of Human Evolution, 22(6): 469–493. [96]
