Conceptualizing the Internet as a Space

Madison Ochs
15 min read · Dec 4, 2020


The internet has a dual identity: a highly sophisticated manmade tool and a rapidly evolving social environment. It is dynamic, complex, immensely impactful, and highly subjective, and efforts to understand it would benefit from a metaphor of the internet as a space. Given the internet’s complexity, what is its appropriate spatial metaphor, and what are that metaphor’s implications? Debates about the internet suggest that the way people conceptualize it informs their views, positions, and ideas. Questions emerge, including whether the internet is organic and free or manufactured, artificial, and curated. This raises further questions about whether the space dictates people’s experience or people dictate the space’s structure. It is also important to consider management of the space and how that should be accomplished.

Three primary subfields inform this examination. First, the (un)commons identity of the internet: is the internet a form of a commons, or does it lack the fundamental ideology and characteristics of a public space for social progress? This spatial and ideological discussion is shaped by scholars from diverse backgrounds, including Adam Arvidsson, John Perry Barlow, and Jan Michael Nolin. Second is the subfield of algorithms, the engineered structures that dictate the internet experience of users. This subfield considers the intentions behind algorithms and how these control structures dictate the nature of the space. Tarleton Gillespie leads this field but is challenged by John Cheney-Lippold and Taina Bucher. Finally, the subfield of regulation is concerned with accountability and management of online activities. How to manage the space plays an important role because regulation stems from the kind of environment regulators believe they occupy. Legal scholars such as Jack Balkin and Seth Kreimer shape this discussion in ways that overshadow active legislation, as government-implemented internet policy is reactive and rarely influences the internet. Each subfield supports the general question of how to appropriately conceive of the internet in a spatial metaphor, and each encompasses fruitful, compelling debates in its own right.

The (Un)Commons Ideology of the Internet

The subfield concerned with the commons identity and overall ideology of the internet encompasses debates about two defining characteristics that factor heavily into considerations of appropriate spatial metaphors for the online space. The debate considers whether the internet can be considered a commons, a shared resource for progress and joint prosperity, and evaluates the ideological foundation of the internet. Whether the internet can be classified as a commons immediately narrows the set of available metaphors, since commons status is essential to the nature of the space. Inherent in the commons-versus-not-commons debate are questions of access, contribution, and norms around participation in the space. Ideological considerations are part of this debate as well because they dictate interactions in the space, set expectations for participants, and establish guidelines for conduct along themes of freedom, liberty, equality, governance, and purpose. This subfield is vitally important for strong conceptualization of the internet on a nuanced level.

The commons debate encompasses several key questions that attempt to diagnose aspects of the internet as a space and whether it could possibly qualify as a commons. Chief among them are whether everyone has access, whether everyone can contribute, and where any inequities are found. Arvidsson (2019) weighs these factors and argues that the internet is a commons because of its transformative, progressive potential, comparing today’s internet to the commons of European feudalism. The historical commons enabled the dismantling of the feudal system, transforming society by motivating social progress and mobility. The influence of the space on the people and the people on the space is echoed by Nolin (2010). His position, that people and the internet are locked in a cycle of mutual influence, reflects Arvidsson’s transformation-focused argument for a commons internet. Ideologically, the argument for a commons internet is reflected by John Perry Barlow, the founding father of the modern cyberlibertarian movement. His idealistic view hinges on the principle that the internet is the ultimate example of a free space: utopian, liberating, and countercultural. Barlow’s 1996 vision valorizes the internet and rails against the government, asserting that the lumbering giant of bureaucracy would never compete with the unbounded potential of the internet. If this ideological foundation is accurate, any spatial metaphor chosen for the internet would have to accommodate such an egalitarian structure.

Other thinkers argue this is reductive and inaccurate, failing to acknowledge the internet’s nuances. Hand and Sandywell (2002) consider the future built by the internet as going one of two ways: cosmopolis (utopian, progressive, and universal) or citadel (dystopian, de-democratizing, isolated). Their blended proposal, technopoiesis, acknowledges the likelihood of the internet contributing to both futures as a transformative agent. Maddox and Malson (2020) disagree with this noncommittal approach. They assert that the internet does not promote freedom; instead, it coopts the concept to colonize users on behalf of the technocratic ruling class. They allege that the First Amendment, despite appearances, is used by internet companies to police interaction and behavior in online spaces, forcing internet participants to behave according to a narrow standard agreed upon by a subset of the space’s occupants. According to Maddox and Malson, the ideological foundation of the internet is exploitation of the space for capitalist gain. The rapid pace of technological advancement and deeply entrenched views on both sides of the commons debate hold both parties at an impasse.

The Role of Algorithms

In addition to considering spatial characteristics, one must evaluate the internet’s process and function. Algorithms fulfill a particular structural need, deciding what is seen, by whom, when, and where. On these grounds, one could consider the internet experience artificial. If it is manufactured, the next question is how the environment is created. One of the key debates in this subfield is whether algorithmic controls are intended to act on the space or on the user. The control mechanisms debate clarifies how the internet is experienced, a crucial consideration in selecting an appropriate, comprehensive spatial metaphor for the internet. The second debate follows the first and considers whether there can be an ethics of algorithms. Applying norms to algorithms creates a culture to moderate the online space, adding further context to the question of how best to understand the internet in familiar spatial terms.

The debate about algorithms focuses primarily on the role they play on the internet. They undeniably influence the space, but some argue they mold and dictate the environment directly while others suggest they control the users. Tarleton Gillespie, a foremost scholar on algorithmic influence online, characterizes these structures as intervening and interfering. In 2015, he argued that content deletion and account suspension in online spaces betray the motives of algorithms, demonstrating their intention to cultivate specific interactions and behaviors; anything outside the favored boundaries is prohibited and stricken. He follows in 2018 by declaring that the active role algorithms play in content management makes it impossible to call them mere moderators; he advocates a classification of algorithms that encompasses the direct role they play in shaping user experience. His space-focused position is strong, but it contradicts other scholars who believe algorithms exert control on the user, influencing the environment through its participants. Thinkers like Bucher and Wilson claim algorithms make users their puppets. Bucher’s 2012 piece about the threat of invisibility bolsters this idea by explaining the mechanisms of Facebook’s algorithm and its primary function: forcing constant engagement with the platform on pain of fading into oblivion, forgotten by one’s online connections. Bucher draws on Foucault’s panopticism and turns it on its head, using today’s culture of performance and oversharing to make her point. Supporting this idea of control through controlees is Cheney-Lippold, who also draws on Foucault to make salient points about people’s algorithmic identities and the implications they have for user control. His argument that users are the control mechanism is built on algorithmic identity. Algorithms conduct extensive data collection and analysis, inspired by biopolitics and biopower structures, to build user classifications entirely outside user control, reducing people to data points that are easier to manipulate and surveil online. Though the nature of the control mechanism is a topic of debate, scholars agree that algorithms pose a threat due to their power.

The question that follows is how algorithms might be understood for the sake of developing a code of ethics informed by algorithms’ power. Gillespie attempts to classify algorithms as active internet architects requiring a direct response that acknowledges their influence. He does not provide a path forward, however, leaving a gap in the debate and failing to seize an opportunity to inform policy and challenge those building the algorithms. Other voices propose a slightly different idea for understanding algorithms’ consequences online. Specifically, Ananny’s idea for a code of ethics is noble and adds value to the discussion by describing guidelines for how algorithms should operate; but holding a nonhuman entity to human standards, standards which are not uniform even among humans, is limiting. That said, if the internet could have its own norms, it could perhaps be whittled down to a specific spatial metaphor adhering to similar guidelines. This debate is incomplete without Wilson’s 2015 discussion of algorithmic influence via the various methods by which a state might control its population. This jarring proposal clearly relates to the architecture of the internet and offers a compelling image for those seeking an illustration of control mechanisms online. Wilson demonstrates a nuanced understanding of the internet, its component parts, and the intentions of algorithms. This position opens the door to a connection between political action and the internet as a potential tool for the exertion of control over a people, calling to mind questions about how to manage this influence and who should be held responsible.

Regulating the Online Space

The regulation of the internet rounds out the debate about appropriate spatial metaphors for the internet because it unites the internet’s form and function with considerations of accountability, responsibility, and ownership of content and behavior online. Two crucial debates underpin this subfield. The first, how the internet’s regulation should be executed and understood, deals with questions about the purposes and consequences of various aspects of the online space. This debate stops short of proposing policy responses, since these are reactive and do not answer the overarching question about the internet’s ideal spatial metaphor. It is included here because of the way that proactive regulation and design could shape how the internet is represented as a space. The second debate considers who is responsible for regulation, who has control, and who should be held accountable for the internet. Most scholars agree that some version of regulation is vital to a healthily functioning internet society, but there are differences of opinion regarding the proper mechanisms for implementing governance, not to mention disagreement about which parties should take ownership of governing the online space. The ultimate source of regulatory power online is a deciding factor in which spatial metaphor best suits the internet.

The debate about how the internet is currently set up for management and regulation encompasses a variety of opinions, several of which take function as the starting point of their framework. The general consensus is that function informs internet management because the consequences of the internet’s actions hinge on intentions toward the user and the space. Lehr et al. (2019) propose that each aspect of internet function should be evaluated separately because, while the parts come together to create the total environment, subcomponents and capabilities have individual implications that must be understood independently. They propose a differentiation between the aspects of the internet that do and do not require policy intervention, furthering their position that the internet in toto cannot be regulated; instead, regulation must be accomplished at the subcomponent level. Cohen (2017) opposes this idea and advocates a sweeping approach based on the internet’s functionality as a whole. Noting that the internet is a public domain filled with private, personal information, Cohen claims that disjointed regulation in the online space facilitates abuse of data, and therefore abuse of the users who create it. The idea of a biopolitical public domain strengthens this position because Foucault’s theories inform other internet debates, and this lens extends discussion of the internet as a data collection and control entity, as opposed to a benevolent tool for public use. Cohen focuses on the importance of regulating the value creation motives of the entire internet, as opposed to the pieces that make this possible, clarifying a feasible approach to effective regulation.

The debate about who should be accountable for regulation immediately follows. The government, internet companies, and users all could play a role, but centralizing responsibility would prove far more effective. Balkin (2004) places this duty on the shoulders of the designers and the legislature. Using freedom of expression as the core of his argument, Balkin argues that the American concept of free speech (i.e., unencumbered by the government) shifts responsibility to the designers of internet spaces because they can and should deliberately structure the internet to facilitate freedom of expression in accordance with requirements set by legislative bodies. Therefore, responsibility moves away from the judiciary and onto legislators because regulation must be proactive. The challenge with this view, however, is its American focus. The United States’ conception of free speech and the requirements for fulfilling these conditions are not universal. Contrary to Maddox and Malson’s view of the First Amendment as a tool for internet colonization, Kreimer (2006) points out the risks of the government adopting a distant, passive stance toward internet regulation based on the First Amendment. The First Amendment’s protections are valuable, but Kreimer clarifies that they must be applied directly by the government, not through internet companies or other proxies. Proxy governance reflects McCarthyesque approaches and is inherently weak in response to powerful internet companies and the gravity of issues related to regulating an intangible, rapidly evolving space. Government attempts, however, are imperfect. Section 230, commonly understood as the precise legal concept that permitted the internet’s growth into its current form, is used by internet companies to avoid accountability for regulating their platforms. According to Cramer (2020), the statute permits companies to refuse culpability for user behavior and the implications of user content. His position does not provide suggestions for a remedy, but it complicates assertions that the government must be the sole owner of internet regulation. Cramer’s view does not directly contradict this idea, but it does compel one to consider the risks of relying on the legislative system to properly manage the enormity of the internet. The omission of legal writing and documentation from this subfield is deliberate: the United States government’s approach to internet regulation is reactive and does not influence the internet’s spatial essence.


Upon thorough review, an appropriate spatial metaphor for the internet appears elusive. Its attributes as a tool, community, control structure, and actively changing entity make it an anomaly. By normal spatial classifications and qualities, the internet should not exist in its current form; it holds in unison traits that should be mutually exclusive, and judgments of its traits are entirely too subjective for a proper assessment. The commons debate exposes irresponsible idealism on the part of scholars who consider the internet a pure social good intended to uplift all participants. Their assessment is reductive and ignores the harsh realities of toxicity, capitalism, and colonial structures online. That said, those with wholly negative views are ignoring a crucial aspect of truth, choosing to overlook the connection, opportunity, and uplift that come from access to more people and more information worldwide. In the end, the internet is likely a commons at times and an un-commons at others, disappointing those who would have it one way or the other. False dichotomies plague the algorithmic subfield as well. Scholars on both sides make astute judgments about the nature of algorithms’ control over the internet space. It is entirely probable that these structures are intended to control and merely rely on different mechanisms to accomplish this goal, shaping the environment at times and influencing users at others. The question of algorithmic ethics is interesting as well, but seems futile since algorithms are manufactured. While they hauntingly resemble people and seem to be alive in their active involvement online, there are engineers and designers responsible for building these artifacts and setting them loose. Holding the online space to a code of ethics is foolish; instead, the focus should be on those actually in control of the internet: the engineers, the technology companies, and the government.
For this reason, regulating the internet requires a revolutionary approach. Policy efforts are reactive and the internet evolves far too quickly to be managed by the slow bureaucracy of today’s government. Instead, the government can and should develop new methods of intervening and monitoring the online space to preserve acceptable activities and to protect users at risk of being manipulated by technology companies. A global governance system might be best, particularly because of the United States-centric policies in place. Whatever the case, internet companies themselves should not be in charge of regulation; capitalism’s influence in this sphere is too strong to permit appropriate self-governance by these entities.

Questions remain not only about how to properly conceptualize the internet but also about the internet’s implications for the future. Technocrats hold enormous sway over the shaping of this technology and influence the government, social trends, and norms. These individuals are a conundrum, since they could be part of a positive change or bring about dangerous corruption in their pursuit of extreme wealth on the backs of internet users. This brings to mind considerations of labor, unionization, and abuse. The internet’s value comes from the data users produce. If users are creating this product for internet companies to sell, should they not be compensated? Should they not have rights and ownership over their data, its use, and its analysis for corporate gains? Consolidation among this pseudo-labor force would create tension with companies upon whom people depend for connection, entertainment, and daily life. Internet companies, for their part, band together to strengthen their hold over the online space. What is the risk of internet companies consolidating to create a new version of a monopoly, not one that discourages competition but one that simply controls the entire vastness of the internet? How should these entities be treated, and what are feasible responses in a capitalist society that would never accept something as bold as collective ownership or a truly free commons space online?

It is impossible to know where the internet will go in the future, and hypothesizing about its potential may be fruitless since the pace of change is beyond what society can manage. Predictions, while helpful, evolve at a rapid pace because each advancement introduces new possibilities and questions about how the internet could, should, or would be occupied and used in the future. One must wonder whether regulation has a place in the conceptual discussion about a spatial metaphor. Regulation is exceedingly delayed and slow, yet the management of the space certainly dictates aspects of the environment. The goal of these debates should be a tangible connection to actionable policy. Theoretical discussion is helpful, but this is a concrete situation that affects multitudes around the globe. There is no room for retrospective musing, and limiting discussion of clear paths forward risks relegating this important debate to academic circles. Despite its faults, this topic and its subfields motivate conversation about who the internet is designed to serve, what its true purpose is, and whether that purpose should be revised. As it stands, the internet cannot be understood or cleanly described with current metaphors and vocabulary. Society requires an immediate recalibration, as further delay risks permitting continued change without proper comprehension of its implications.


Ananny, M. (2015). Toward an ethics of algorithms: Convening observation, probability, and timeliness. Science, Technology, & Human Values, 41(1), 93–117.

Arvidsson, A. (2019). Capitalism and the commons. Theory, Culture & Society, 37(2), 3–30.

Balkin, J. (2004). Digital speech and democratic culture: A theory of freedom of expression for the information society. New York University Law Review, 79(1).

Barlow, J. (1996). A declaration of the independence of cyberspace. Semantic Scholar.

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180.

Cheney-Lippold, J. (2011). A new algorithmic identity. Theory, Culture & Society, 28(6), 164–181.

Cohen, J. E. (2017). The biopolitical public domain: The legal construction of the surveillance economy. Philosophy & Technology, 31(2), 213–233.

Cramer, B. (2020). From liability to accountability: The ethics of citing Section 230 to avoid the obligations of running a social media platform. Journal of Information Policy, 10, 123–150.

Crawford, K. (2015). Can an algorithm be agonistic? Ten scenes from life in calculated publics. Science, Technology, & Human Values, 41(1), 77–92.

Gillespie, T. (2015). Platforms intervene. Social Media + Society, 1(1).

Gillespie, T. (2018). Platforms are not intermediaries. The Georgetown Technology Law Review, 2(2), 198+.

Hand, M., & Sandywell, B. (2002). E-topia as cosmopolis or citadel. Theory, Culture & Society, 19(1–2), 197–225.

Kreimer, S. F. (2006). Censorship by proxy: The First Amendment, internet intermediaries, and the problem of the weakest link. University of Pennsylvania Law Review, 155(1), 11–101.

Lehr, W., Clark, D. D., Bauer, S., Berger, A., & Richter, P. (2019). Whither the public internet? Journal of Information Policy, 9, 1–42.

Maddox, J., & Malson, J. (2020). Guidelines without lines, communities without borders: The marketplace of ideas and digital manifest destiny in social media platform policies. Social Media + Society, 6(2).

Nolin, J. M. (2010). Speedism, boxism and markism: Three ideologies of the internet. First Monday, 15(10).

Raymond, M. (2013). Puncturing the myth of the internet as a commons. Georgetown Journal of International Affairs, 53–64.

Wilson, S. L. (2015). How to control the Internet: Comparative political implications of the internet’s engineering. First Monday, 20(2).