Conceptualizing the Internet as a Space


The (Un)Commons Ideology of the Internet

The commons debate encompasses several key questions that attempt to diagnose the internet as a space and determine whether it could qualify as a commons. Chief among them are whether everyone has access, whether everyone can contribute, and where any inequities lie. Arvidsson (2019) takes up these questions and argues that the internet is a commons because of its transformative, progressive potential, comparing today’s internet to the commons of European feudalism. The historical commons helped dismantle the feudal system, transforming society by motivating social progress and mobility. This mutual influence between space and people is echoed by Nolin (2010), whose position that people and the internet exert a cycle of influence on each other reflects Arvidsson’s transformation-focused argument for a commons internet. Ideologically, the argument for a commons internet is embodied by John Perry Barlow, the founding father of the modern cyberlibertarian movement. His idealistic view hinges on the principle that the internet is the ultimate free space: utopian, liberating, and countercultural. Barlow’s 1996 vision valorizes the internet and rails against government, asserting that the lumbering giant of bureaucracy could never compete with the internet’s unbounded potential. If this ideological foundation is accurate, any spatial metaphor chosen for the internet would have to accommodate such an egalitarian structure.

Other thinkers argue this is reductive and inaccurate, failing to acknowledge the internet’s nuances. Hand and Sandywell (2002) consider the future built by the internet as going one of two ways: cosmopolis (utopian, progressive, and universal) or citadel (dystopian, de-democratizing, isolated). Their blended proposal, technopoiesis, acknowledges the likelihood of the internet contributing to both futures as a transformative agent. Maddox and Malson (2020) disagree with this noncommittal approach. They assert that the internet does not promote freedom; instead, it co-opts the concept to colonize users on behalf of the technocratic ruling class. They allege that the First Amendment, despite appearances, is used by internet companies to police interaction and behavior in online spaces, forcing internet participants to behave according to a narrow standard agreed upon by a subset of the space’s occupants. According to Maddox and Malson, the ideological foundation of the internet is exploitation of the space for capitalist gain. The rapid pace of technological advancement and deeply entrenched views on both sides of the commons debate leave both parties at an impasse.

The Role of Algorithms

The debate about algorithms focuses primarily on the role they play on the internet. That they influence the space is undeniable, but some argue they mold and dictate the environment directly, while others suggest they control the users. Tarleton Gillespie, a foremost scholar on algorithmic influence online, characterizes these structures as intervening and interfering. In 2015, he argued that content deletion and account suspension in online spaces betray the motives of algorithms, demonstrating their intention to cultivate specific interactions and behaviors; anything outside the favored boundaries is prohibited and stricken. He followed in 2018 by declaring that the active role algorithms play in content management makes it impossible to call them mere moderators; he advocates a classification of algorithms that encompasses the direct role they play in shaping user experience. His space-focused position is strong, but it contradicts other scholars who believe algorithms exert control on the user, influencing the environment through its participants. Thinkers like Bucher and Wilson claim algorithms make users their puppets. Bucher’s 2012 piece on the threat of invisibility bolsters this idea by explaining the mechanisms of Facebook’s algorithm, whose primary function is to force constant engagement with the platform on pain of fading into oblivion, forgotten by one’s online connections. Bucher draws on Foucault’s panopticism and turns it on its head, using today’s culture of performance and oversharing to make her point. Supporting this idea of control through the controlled is Cheney-Lippold, who also draws on Foucault to make salient points about people’s algorithmic identities and their implications for user control. His argument that users are the control mechanism is built on algorithmic identity: algorithms conduct extensive data collection and analysis, inspired by biopolitics and biopower structures, to build user classifications entirely outside user control, reducing people to data points that are easier to manipulate and surveil online. Though the nature of the control mechanism is debated, scholars agree that algorithms pose a threat because of their power.

The question that follows is how algorithms might be understood for the sake of developing a code of ethics informed by their power. Gillespie attempts to classify algorithms as active internet architects requiring a direct response that acknowledges their influence. He does not provide a path forward, however, leaving a gap in the debate and missing an opportunity to inform policy and challenge those building the algorithms. Other voices propose slightly different ways of understanding algorithms’ consequences online. Ananny’s proposed code of ethics is noble and adds value to the discussion by describing guidelines for how algorithms should operate, but holding a nonhuman entity to human standards, standards which are not uniform even among humans, is limiting. That said, if the internet could have its own norms, it could perhaps be distilled to a specific spatial metaphor that adheres to similar guidelines. This debate is incomplete without Wilson’s 2015 discussion of algorithmic influence through the various methods by which a state might control its population. This jarring proposal speaks directly to the architecture of the internet and offers a compelling image for those seeking an illustration of control mechanisms online. Wilson demonstrates a nuanced understanding of the internet, its component parts, and the intentions of algorithms. This position opens the door to a connection between political action and the internet as a potential tool for exerting control over a people, calling to mind questions about how to manage this influence and who should be held responsible.

Regulating the Online Space

The debate about how the internet is currently set up for management and regulation encompasses a variety of opinions, several of which rely on the concept of function as the framework’s origin. The general consensus is that function informs internet management because the consequences of the internet’s actions hinge on intentions toward the user and the space. Lehr et al. (2019) propose that each aspect of internet function should be evaluated separately: while the parts come together to create the total environment, subcomponents and capabilities have individual implications that must be understood independently. They propose differentiating between the aspects of the internet that do and do not require policy intervention, furthering their position that the internet in toto cannot be regulated; instead, regulation must be accomplished at the subcomponent level. Cohen (2017) opposes this idea and advocates a sweeping approach based on the functionality of the internet as a whole. Noting that the internet is a public domain filled with private, personal information, Cohen claims that disjointed regulation in the online space facilitates abuse of data, and therefore abuse of the users who create it. The idea of a biopolitical public domain strengthens this position because Foucault’s theories inform other internet debates, and this lens extends discussion of the internet as a data collection and control entity, as opposed to a benevolent tool for public use. Cohen focuses on regulating the value creation motives of the entire internet rather than the pieces that make this possible, clarifying a feasible approach to effective regulation.

The debate about who should be accountable for regulation immediately follows. The government, internet companies, and users could all play a role, but centralizing responsibility would prove far more effective. Balkin (2004) places this duty on the shoulders of designers and the legislature. Using freedom of expression as the core of his argument, Balkin argues that the American concept of free speech (i.e., unencumbered by the government) shifts responsibility toward designers of internet spaces because they can and should deliberately structure the internet to facilitate freedom of expression in accordance with requirements set by legislative bodies. Responsibility therefore moves away from the judiciary and onto legislators because regulation must be proactive. The challenge with this view, however, is its American focus: the United States’ conception of free speech and the requirements for fulfilling these conditions are not universal. Contrary to Maddox and Malson’s view of the First Amendment as a tool for internet colonization, Kreimer (2006) points out the risks of the government adopting a distant, passive stance toward internet regulation based on the First Amendment. The First Amendment’s protections are valuable, but Kreimer clarifies that they must be applied directly by the government rather than through internet companies or other proxies. Proxy governance echoes McCarthyesque approaches and is inherently weak in the face of powerful internet companies and the gravity of regulating an intangible, rapidly evolving space. Government attempts, however, are imperfect. Section 230, commonly understood as the precise legal concept that permitted the internet’s growth into its current form, is used by internet companies to avoid accountability for regulating their platforms. According to Cramer (2020), the statute permits a refusal of culpability for user behavior and the implications of user content. His position does not suggest a remedy, but it complicates assertions that the government must be the sole owner of internet regulation. Cramer’s view does not directly contradict this idea, but it does compel one to consider the risks of relying on the legislative system to properly manage the enormity of the internet. Legal writing and documentation are deliberately omitted from this subfield’s discussion because the United States government’s approach to internet regulation is reactive and does not influence the internet’s spatial essence.


Questions remain not only about how to properly conceptualize the internet but also about the internet’s implications for the future. Technocrats hold enormous sway over the shaping of this technology and influence the government, social trends, and norms. These individuals are a conundrum: they could be part of a positive change or bring about dangerous corruption in their pursuit of ultra-wealthy status on the backs of internet users. This brings to mind considerations of labor, unionization, and abuse. The internet’s value comes from the data users produce. If users are creating this product for internet companies to sell, should they not be compensated? Should they not have rights and ownership over their data, its use, and its analysis for corporate gains? Consolidation among this pseudo-labor force would create tension with the companies upon which people depend for connection, entertainment, and daily life. Internet companies, meanwhile, band together to strengthen their hold over the online space. What is the risk of internet companies consolidating to create a new version of a monopoly, not one that discourages competition but one that simply controls the entire vastness of the internet? How should these entities be treated, and what are feasible responses in a capitalist society that would never accept something as bold as collective ownership or a truly free commons space online?

It is impossible to know where the internet will go in the future, and hypothesizing about its potential may be fruitless, since the pace of change is beyond what society can manage. Predictions, while helpful, evolve rapidly because each advancement introduces new possibilities and questions about how the internet could, should, or would be occupied and used in the future. One must wonder whether regulation has a place in the conceptual discussion about a spatial metaphor. Regulation is exceedingly delayed and slow, yet the management of the space certainly dictates aspects of the environment. The goal of these debates should be a tangible connection to actionable policy. Theoretical discussion is helpful, but this is a concrete situation that affects multitudes around the globe. There is no room for retrospective musing, and limiting discussion of clear paths forward risks relegating this important debate to academic circles. Despite its faults, this topic and its subfields motivate conversation about who the internet is designed to serve, what its true purpose is, and whether that purpose should be revised. As it stands, the internet cannot be understood or cleanly described with current metaphors and vocabulary. As a society, an immediate recalibration is required, as further delay risks permitting continued change without proper comprehension of the implications.


Arvidsson, A. (2019). Capitalism and the commons. Theory, Culture & Society, 37(2), 3–30.

Balkin, J. (2004). Digital speech and democratic culture: A theory of freedom of expression for the information society. New York University Law Review, 79(1).

Barlow, J. (1996). A declaration of the independence of cyberspace.

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180.

Cheney-Lippold, J. (2011). A new algorithmic identity. Theory, Culture & Society, 28(6), 164–181.

Cohen, J. E. (2017). The biopolitical public domain: The legal construction of the surveillance economy. Philosophy & Technology, 31(2), 213–233.

Cramer, B. (2020). From liability to accountability: The ethics of citing Section 230 to avoid the obligations of running a social media platform. Journal of Information Policy, 10, 123–150.

Crawford, K. (2015). Can an algorithm be agonistic? Ten scenes from life in calculated publics. Science, Technology, & Human Values, 41(1), 77–92.

Gillespie, T. (2015). Platforms intervene. Social Media + Society, 1(1).

Gillespie, T. (2018). Platforms are not intermediaries. Georgetown Law Technology Review, 2(2), 198+.

Hand, M., & Sandywell, B. (2002). E-topia as cosmopolis or citadel. Theory, Culture & Society, 19(1–2), 197–225.

Kreimer, S. F. (2006). Censorship by proxy: The First Amendment, internet intermediaries, and the problem of the weakest link. University of Pennsylvania Law Review, 155(1), 11–101.

Lehr, W., Clark, D. D., Bauer, S., Berger, A., & Richter, P. (2019). Whither the public internet? Journal of Information Policy, 9, 1–42.

Maddox, J., & Malson, J. (2020). Guidelines without lines, communities without borders: The marketplace of ideas and digital manifest destiny in social media platform policies. Social Media + Society, 6(2).

Nolin, J. M. (2010). Speedism, boxism and markism: Three ideologies of the internet. First Monday, 15(10).

Raymond, M. (2013). Puncturing the myth of the internet as a commons. Georgetown Journal of International Affairs, 53–64.

Wilson, S. L. (2015). How to control the Internet: Comparative political implications of the internet’s engineering. First Monday, 20(2).
