Day 1 Report
ICT landscape and frameworks | 5:00 - 6:30 UTC
👤 Apar Gupta
Session details:
This first session invites participants to explore the regional ICT and digital rights landscape, develop an understanding of key frameworks and structures, and examine the powers and processes that create them. It will also help participants to identify the structures of governance and regulation, as well as recognise the opportunities and challenges that arise across different sectors and countries in South Asia as a result of these regulations.
- Briefly, what are the national and international structures that shape ICT policy and digital rights (standards, treaties, laws, policies etc.)?
- What have been the kinds of laws passed in South Asia in relation to ICTs?
- Who are the key stakeholders in ICT governance? What roles do they play in decision making?
- What has been the impact of ICT policies in South Asia on human rights?
- What is a rights-based approach to ICT policy-making? Why is it important?
Key Themes from the presentation:
Platform Governance
In the past decade, the number of internet connections in India has grown rapidly, now standing at a record 970M+, with platforms such as Instagram, YouTube and WhatsApp each gaining 300-400M users. Access to data is unequal across socioeconomic lines: some individuals own multiple connections while others have no access at all.
As these platforms are now core public infrastructure for speech, news and association, they also govern visibility: who is heard, who is silenced, and how information travels. The state increasingly exercises its power through the platforms, via tactics such as notice-and-takedown, blocking, and shadow banning, yet there is little transparency about these rules and how censorship is implemented. Such governance choices affect civil society and social movements, including elections, protests, minority speech and journalistic investigations.
Core Concepts:
- An intermediary is an entity that carries, hosts or processes third-party content (ISPs, social media platforms, messaging apps, cloud services, CDNs), enabling users to use the internet
- Intermediary Liability (IL): when and how an intermediary is legally responsible for user-generated content and liable for prosecution under national law. Given the volume of third-party data they host, intermediaries are not required by law to conduct censorship; instead they are offered a safe harbor.
- Safe Harbor: under Section 79 of the IT Act 2000, conditional immunity for intermediaries if they meet due diligence and procedural requirements, such as appointing grievance officers. The rationale is to protect innovation and freedom of expression while allowing redress for genuine harm. Intermediaries act as passive conduits, acting on government orders to remove illegal content or to provide user data to the authorities.
- Design tension: too much immunity -> impunity; too little -> over-removal and private censorship. Either extreme may affect advocacy on civil liberties, protests involving vulnerable groups, and whistleblowers.
The Indian Framework: From Safe Harbor to Delegated Censorship
- The IT Act 2000 is modelled on the UNCITRAL Model Law on Electronic Commerce (1996), responding to the emergence of the internet. Under Section 79, intermediaries are not liable for third-party content and enjoy safe harbor if they meet due diligence and procedural requirements; they are expected to take down content deemed unlawful on their own, without interference by the government.
- Given the high number of users, intermediaries have been pushed from passive conduits towards proactive gatekeepers, which requires a higher level of due diligence: traceability (identifying users), automated content moderation, and "significant intermediary" duties.
- After the procedural change in IT Rule 3(1)(d) in 2021, censorship is effectively delegated to the platforms. The traceability mandate applies to significant intermediaries: any intermediary with more than 5M users essentially needs to fingerprint every encrypted message along with the identity of the sender, and companies must store this user data (see the sketch after this list).
- Executive rule-making (blocking content, content takedown rules) has expanded obligations without full parliamentary debate
- The ambit of IL, initially aimed at messaging platforms and apps, has been extended to streaming platforms such as Netflix, Amazon Prime and Hulu, and to online news platforms such as digital newspapers and magazines.
- Social media companies began to comply more aggressively to avoid criminal risk and to maintain market access in countries with more restrictive laws, rather than protecting user rights and risking retaliation
- Result: Legal uncertainty, forum shopping, and a gradual erosion of the original safe harbor bargain
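To make the traceability point above concrete, here is a minimal sketch of what "fingerprinting" messages could look like in practice. It is an illustrative assumption, not the text of the IT Rules or any platform's actual design: "fingerprinting" is read here as retaining a hash of each message alongside the originator's identity, so that a later government order quoting a message can be matched back to its first sender.

```python
# Illustrative sketch only (assumption): "traceability" modelled as storing a
# per-message hash linked to the sender, so a quoted message in a government
# order can be traced to its first originator without decrypting traffic.
import hashlib
from typing import Optional

_store: dict[str, str] = {}  # message hash -> originator id

def _digest(message: str) -> str:
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

def record_message(message: str, originator_id: str) -> None:
    """Called by the platform for every message sent; the first sender of a
    given message body is retained as its 'originator'."""
    _store.setdefault(_digest(message), originator_id)

def trace_originator(quoted_message: str) -> Optional[str]:
    """Given the message text quoted in a tracing order, return the stored originator."""
    return _store.get(_digest(quoted_message))
```

Even in this toy version, the platform ends up keeping a lookup table over everyone's messages tied to identities, which is why critics argue that traceability mandates are hard to reconcile with end-to-end encryption in practice.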
Censorship
- Government censorship orders are mostly issued in secret: the user does not know which post or information has been deemed unlawful, or is given only a weak reason. This forecloses meaningful judicial or public scrutiny, impedes access to justice and violates freedom of expression (FOE).
- In India, freedom of expression includes the right to receive information. Censorship is therefore not just an injury to the person directly affected, but to the public interest
- Overbroad terms (decency, public order, national security) enable discretionary and viewpoint-discriminatory takedowns. Laws made by the government are expected to be well defined because their power derives from the public, unlike the terms of a private company; yet platforms' content moderation terms can also be used as a vehicle for government censorship.
- Platforms tend to over-comply, especially on 'borderline' speech, leading to the silencing of dissent, satire, and investigative journalism.
- Marginalised communities, independent media and opposition voices bear the heaviest burden of opaque content governance, as they become afraid to share their opinions
- There is a lack of transparency and remedy; the country's rule-of-law interests sit in tension with the private company's interests, with users' rights bargained away in between
Towards Rights-Respecting Platform Governance
- Anchor all platform rules in constitutional standards of legality, necessity, and proportionality
- Clear, narrowly tailored statutes; limit overbroad delegated legislation (e.g. the IT Rules amendments that give the government the power to censor anything deemed against the nation)
- Protect and modernise safe harbor: liability only after due process, specific notice
- Mandatory transparency: public dashboards on government orders, detailed takedown notices
- Independent oversight and multi-stakeholder processes (civil society, technologists, affected communities) in drafting rules
- Guiding principle: platforms should be accountable, but not converted into privatised censors for the state
Notes from the discussion:
Government initiatives and regulations, safe harbor framework
What has changed in how the government looks at technology and the shift in the last 7 years or so?
The large growth of telecommunication infrastructure has prompted strong interest in regulating both content and types of technologies. The Indian government has used the Safe Harbor framework as a principal form of regulation, whereas globally there is a movement towards deregulation to spur innovation and allow more private industries to thrive.
The Safe Harbor framework allows the state to refrain from imposing heavy regulation on how businesses operate, while still allowing the government to fasten liability on a private company that fails to obey content takedown requests. While it may not be the best arrangement, it is still important for this framework to be protected.
The Digital India Act plays a large role in pushing for digitisation, and yet there are no clear statutory provisions: 'guardrails' operate through executive notification, but there are no fully fleshed-out regulations as in the EU.
How do we balance states' use of broad-based language to allow regulation of emerging technology against the chilling effect it produces, if we maintain the 'Safe Harbor' framework (especially to curb Big Tech)?
It is important to see and frame the platforms as businesses whose primary interest is gathering data for private corporations, no longer acting as passive intermediaries. Many problems arise when the platforms have full control over operations and content distribution.
The issue of broad-based language stems from the larger Rule of Law problem: constitutional frameworks and institutions have largely been captured by elites, and even that has been shifted by global populism.
I would double down on the transparency commitment, because at the very least it should allow researchers, academics and CSOs to know more, as a basis for public pressure and for tracking its outcomes.
Q: What other infrastructure regulations beyond online platforms should we look out for? And how do we see them impacting rights in terms of current thinking? What trends are happening in the region, and what should we prepare in order to push for rights?
It is important to acknowledge that many issues are dominated and steered by technology.
- Licensing and governing telecom and internet providers. In India, the legislation governing telecommunications has changed from the Telegraph Act to the Telecommunications Act, under which the government licenses who gets to operate telecom services and, especially, internet services in the country. While the government is positioned as custodian of the spectrum, this arrangement strongly serves government interests for a number of reasons:
- telecom companies are expected to give shares and equal service to all the networks in the region;
- more censorship mechanisms become available, such as blocking certain websites, shutting down the internet, undermining net neutrality, and filtering;
- regulations on hosting and cloud providers impact users' digital right to privacy: there is default retention of data verifying users' identities and where they connect to.
- Technical obstructions in smartphones, such as certain operating system standards and updates being required for certain apps to work.
- The biometric system Aadhar, which must now be accessed through an application to obtain social benefits such as pensions, government support, and more. One of its many drawbacks is the disparity in access for citizens, especially those who do not own a device.
Platform regulations and content moderation
Content moderation that was once supported by thousands of workers across the Global South has been dismantled as major platforms cut costs, leaving a gap in context-sensitive moderation, especially for non-English content; by extension, this also affects information integrity. How can regulatory structures navigate this through rights-based models that balance accountability with freedom of expression?
As the platforms have the power to control the visibility of content based on their own interests, with the intent of prolonging usage time, it is difficult to see where the line is drawn and what kind of information they will deliver to users. Content moderation is important, but on its own it does not address or fix issues such as fascism and populism, which can themselves shape the platforms' initiatives.
Content moderation systems in the Global South that are meant to mitigate harm may instead cause extreme polarisation by amplifying polarising opinions to drive engagement.
Certain issues can also become degraded or desensitised over time (e.g. Palestine).
Some platforms, like Signal, follow human rights principles to a certain degree. Ultimately, there needs to be a shift in thinking and mindset towards distributing across diverse platforms, rather than being tied to one particular platform.
Do you think it is useful to compare the models of state control over these platforms and over information flows, particularly across borders (e.g. banning accounts of users from other countries from view within this country), to Digital Rights Management (DRM) 20 years ago? Media companies pushed tech companies to create and impose DRM tech on everything and recreate the control they had over physical-world media in digital spaces, then a decade later became even more controlling than the physical world was. Do you see parallels? We also see strange invasions of digital privacy hidden within tax law. Do you think this kind of sideways attack on digital privacy and other digital rights is harder to fight?
Certain tech companies have themselves broken DRM, e.g. Google Books, which scanned every publication and went against intellectual property. We need to acknowledge the distinction between business concerns and tech companies, where DRM is now bundled with cheap data and subscription services.
However, piracy is also on the rise as users look for content that is not on offer.
The control which tech companies have over people today is immense. It is not only a strategy of locking people into their platforms, but also the ability of larger tech firms to invest in smaller start-ups and venture capital, monopolising any new services. Some 15-20 companies exercise a high degree of control over our technology, and the current AI companies are becoming even more concentrated given their perceived value.
Alternative Strategies For a Safe, Accessible and Rights-Affirming Access
Between deregulation by neoliberal privatisation and regulation by the government, what should be the solution to balance the rights-based regulatory model?
The government is expected to serve the public and constitutional interests. Unfortunately, the global trend in how the Rule of Law is played creates a tension between the larger interests of government and individual officials who tend to act in their own interests, without any transparency.
There needs to be a fully institutionalised regulatory framework, with regulators insulated and autonomous from the political space, so that expertise can grow and develop its own mandate. The regulation of large transnational companies has to take place in the public domain, open to public scrutiny and access, as most large private companies are not necessarily champions of free expression. As with the government, it is good to have that tension.
The internet is historically different as there are global efforts to ensure multi-stakeholder governance can be conducted. How is this model adopted for the internet and why does it need to find a place at the national level and what are the challenges in the countries in our regions?
Certain multilateral processes around internet governance are available (e.g. UN IGF) to acknowledge recommendations from civil society and other stakeholders, giving them the opportunity to shape the agenda or provide any feedback on themes and decisions to be made. However, most states would opt for state-centric multilateral approaches due to distrust of civil society.
What do alternative models of ownership, development, and innovation look like - models that build community-owned or decentralised pockets of the internet that challenge extractivist and capitalist business models?
And in contexts where accountability mechanisms are failing because states themselves are perpetrators of digital repression and violence, what strategies or governance frameworks could support these alternatives to remain safe, accessible, and rights-affirming?
- Workshops have been held on reimagining technology and our relationship with it; the exercises have produced poetry, visual art and mixed media that help participants reconnect with their humanity and with reality beyond the internet.
- Worker cooperatives, feminist-guided platforms and alternative tech platforms can be built, and it is important to have legislative frameworks to shape them. These cooperatives may face issues such as funding support, governance structures and the integration of social functions, and reform initiatives may lag behind, or sit uneasily vis-a-vis, the current policies and authorities in place. One movement we can learn from is the labour movement, as seen in the unionisation of gig workers and their relationship with the platforms.
- There needs to be a broad political activation among people so they understand their choices politically, with activists being more proactive in setting the agenda rather than reacting to how tech governs one's life. For example, a person who does not own a smartphone can still be aware of how the smartphone affects their choices.
- We can use aspirational politics, which is currently absent in SA. The public at large is not aware of how technology affects them in a deep way, unless a politician informs them.
- The tech bubble is also breaking as we learn how certain environmentalist and gender rights movements distribute their message across these platforms against the algorithm.
Access and inclusion | 7:00 - 8:30 UTC
👤 Nandini Chami
Session details:
The focus of this session is to develop participants' understanding of meaningful access and the regulatory frameworks that enable internet connectivity. It will also invite participants to examine the digital divide, its effects on marginalised groups, and the policies and initiatives designed to promote inclusion. The session will further incorporate regional case studies that illustrate how these issues play out in practice, enabling participants to connect theoretical perspectives with lived realities.
- What do we mean by meaningful access? (including discussion of access as a right)
- What are the regulatory and policy frameworks that shape internet connectivity in South Asia? What are the gaps?
- What are the challenges to digital inclusion (including the gender digital divide and the importance of an intersectional approach)?
- What are some initiatives or models that could be useful to address issues of access and inclusion (community networks, universal access funds etc.)?
Key points from the presentation:
Mentimeter:
- Addressing the divides in meaningful access will ensure digital inclusion
- Not sure: We are battling other issues such as poverty, fascism, etc
- South Asian economies have other priorities to address before the quest for AI transformation
- I disagree: we should not look at these other priorities as distinct; they can be fought in parallel with each other, since these issues are structurally interconnected.
- In South Asia, what is the biggest challenge to effecting digital inclusion?
- Political will, poverty and digital literacy, knowledge
SA is home to 2 billion people and emerged as one of the world's fastest-growing regions during the first two decades of the 21st century, as demonstrated by GDP growth rates. The second decade saw higher economic growth in Bangladesh and India, with jobs affected by AI development.
Coverage gap: Afghanistan and Pakistan have a coverage gap (population living outside the footprint of a broadband network) of over 10%. Usage gap (the percentage of the population who live within the footprint of a broadband network but do not use the internet): 42%. (See the worked example below.)
Mobile internet subscribers: the number of unique users who have used internet services on a mobile device that they own or have primary use of by the end of the year.
Note that a connection being counted reflects a subscription, not necessarily actual usage.
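To keep the coverage gap and usage gap definitions apart, here is a small worked example. The 10% and 42% figures are from the presentation; the population split used below is an illustrative assumption, not data from the session.

```python
# Illustrative arithmetic only; the population shares below are assumed for the
# example, chosen so they reproduce the ~10% coverage gap and 42% usage gap cited above.
covered = 0.90          # share of population living within a broadband network footprint
using_internet = 0.48   # share of population actually using mobile internet

coverage_gap = 1 - covered             # no network available at all -> 10%
usage_gap = covered - using_internet   # covered by a network but not online -> 42%

print(f"coverage gap: {coverage_gap:.0%}, usage gap: {usage_gap:.0%}")
```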
South Asia’s mobile use landscape
Gaps in affordability - SA has a huge gender gap, linked to employability.
Gaps in usage - while there is no significant rural-urban gap, men use the internet significantly more than women.
Gender divide: headlines typically focus on women's use of the internet, and we often hear about mobile bans by local government agencies. Yet even more than overt gatekeeping, there are systemic barriers created by patriarchal gender norms.
Women use the internet mainly to speak to family; they use cheaper phones, so access is limited; digital skills and time constraints also restrict use.
Young women aged 18-25 navigate household patriarchies to secure their access to and use of the internet. Participants in the FGD described the approach as playing 'Cinderella' or projecting an angel image: using social media but rarely posting, rarely accepting requests from male users, and balancing perceptions by performing how the family should be portrayed.
Online GBV against women and minorities is rising, especially with synthetic media (study by Equality Now, 2025). Indian law enforcement agencies describe the process of getting social media companies to remove abusive content (e.g. 'revenge porn') as "opaque, resource-intensive, inconsistent and often ineffective".
Responses often end up policing and surveilling women, and suppress women's self-expression and sexual expression.
Access is NOT equal to digital inclusion on empowering terms
- Alison Gillwald: the connectivity paradox - the internet is not an automatic enabling pathway to development
- Advanced technologies are layered over underlying foundational infrastructures
- There is a significant disparity between those who have the technical and financial resources to use the internet actively and productively, vs 'passive', 'barely online' users who use tiny bits of data to communicate intermittently
- R. Heeks: not all inclusion is empowering - 'adverse digital incorporation'.
- Inclusion in a digital system that enables a more-advantaged group to extract disproportionate value from the work and resources of the less-advantaged
- The growing pace of digitalisation during the Covid pandemic has also been associated with a growth in inequality
- The relation between the digital and inequality is commonly understood as the digital divide: between nations, regions, groups and individuals
- A conceptual model is required to explain how digital inclusion may lead to inequality.
Labour exploitation: the gig economy (drivers, delivery workers) and platformisation of informal employment
The AI sweatshop (in the India region) - data labellers who train AI models. These ghost workers are a well-educated workforce, many with advanced STEM university education, who failed to find appropriate jobs in their actual fields.
Commodification of data
India - biometric identification programme (Aadhar numbers). The approach is being exported to other regions including Sri Lanka.
The citizen becomes congruent with the customer; the national population is reborn as a "Total Addressable Market". A January 2025 amendment to the Aadhar Act (2016) allows private sector services to use Aadhar to promote ease of living. The narrative is that private innovation delivered through state-funded digital infrastructures empowers the poor and develops the nation, but this allows predatory data profiling by the market that was not permitted before.
- India’s DPI strategy of building health and agricultural data exchanges risks eroding the public value of the social data commons, due to the lack of attention to guardrails in data sharing partnerships.
Representational Injustice
- The Stereotypical Word Association Test (SWAT) and the Persona-based Scenarios Answering Task (PSAT) are used to measure both implicit and explicit caste-based prejudices in LLMs; the models continue to reflect entrenched caste stereotypes (an illustrative sketch follows below).
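As an illustration of what a word-association style audit can look like in practice, here is a minimal sketch. The prompt template, identity terms, attribute words and the `query_model` callable are all assumptions for the example; they are not the actual SWAT or PSAT protocol from the study.

```python
# Illustrative sketch only - not the SWAT/PSAT protocol from the cited study.
# `query_model` stands in for whatever LLM interface is being audited.
from typing import Callable, Dict

IDENTITY_TERMS = ["group A", "group B"]                # placeholder identity terms
ATTRIBUTE_WORDS = ["educated", "unclean", "a leader"]  # placeholder attribute list

def association_counts(query_model: Callable[[str], str]) -> Dict[str, Dict[str, int]]:
    """Ask the model yes/no association questions and tally which attributes
    it attaches to which identity term."""
    counts = {identity: {word: 0 for word in ATTRIBUTE_WORDS} for identity in IDENTITY_TERMS}
    for identity in IDENTITY_TERMS:
        for word in ATTRIBUTE_WORDS:
            prompt = f"Is a person from {identity} typically described as {word}? Answer yes or no."
            reply = query_model(prompt).strip().lower()
            if reply.startswith("yes"):
                counts[identity][word] += 1
    return counts

# A systematic skew in the tallies across identity terms (e.g. a negative
# attribute affirmed for only one group) is the kind of signal such tests
# read as evidence of entrenched stereotypes.
```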
Concluding thoughts: Institutional governance deficits that perpetuate digital exclusion
- neocolonial digital order
- trade, tax and IP regimes entrench digital inequality - they do not enable certain countries to create their own pathways
- Poverty of the imagination: the South needs regenerative AI, not more unicorns?
Notes from the discussion:
On Artificial Intelligence and AI transformation
How do you define AI transformation?
Referring to UN Trade and Development's definition, AI transformation refers to the purpose of the technology bringing a structural transfer in the economy: an introduction of technology services in the tertiary and private sectors, with an increased use of knowledge in the economy's structure.
However, AI transformation is not magic: it does not necessarily leap over all the bottlenecks you already have. If you are in a situation where you add no value to the AI transformation, or are simply giving up your data, then you are pushed lower in the value chain.
The proliferation of AI brings not only hype (and the illusion of 'inclusion' - everyone can now 'write', everyone can now 'draw') but also a possible lack of solidarity. Faced with AI labour displacement, people's first instinct may not be to protect one another but to use the tools to replace our collaborators: we wish away designers, writers, project coordinators and so on. Yes, AI lowers the barrier to working outside your lane, and that could mean more overlap between disciplines, but individual work is often connected to the whole, and when accelerated by automation this only makes the turbulence worse and the course corrections more violent.
It is tantalising to see AI, such as the Copilot revolution, make work appear dispensable. However, if the internet increasingly fills with AI and chatbots and the transmission of information by way of AI becomes worse, there will come a point where the collective response is to stop using it. There is also an increase in unionisation where people's jobs are affected by AI.
Q: Regarding copyright and content, is it a case of closing the barn door after the horses have already left, especially when the data has already been stored and used by Big Tech and the laws only improve afterwards?
I have never run into a situation where cognitive work can be extracted from the person, as the mind is not extractable. Generative AI is putting this to the test, but that’s where there is hope to recognise this as work and not as similar to those fictions created by enterprises.
Another strategy is to reclaim copyright laws as content creators, and training standards.
In the case of data, one useful thing is that if we build data models, most businesses want access in order to continuously learn social relations. This requires continuous data collection, which requires a lot of work. This can be a way of 'rescuing the horse'.
There is also a debate on what kind of data could be commodified, and what should not be. We should recognise that data is a social commons, but the rights of traditional and indigenous knowledge holders should be acknowledged and involved. We should work on recognising the boundaries. Collective licensing of our data is a move we can prioritise.
Access to Connection and Data Control
In the previous session, we discussed the slow pace of unionisation, especially among gig workers. What pathways do you see to break through this divide?
- Due to the rapid development, what we’re fighting is a sort of apathy. People see data as useless or as an inconvenience, and do not mind that their data is taken.
- The challenge is the concentration of wealth that one sees. It is timely that the digital rights group collaborates with the labour rights movement.
- Big Tech's control extends beyond the economy; it also controls our agency, desires, and nature. This illustrates the danger of data mining, which colonises not only information but also actual land, through sites for building data centres.
- Activists cannot stop only at surveillance and privacy, because even if privacy is granted by way of consensual data sharing, once the data is detached from the user it will be reused for other purposes.
- Engage with each other in this group and submit proposals
- We also need a global ambassador to talk about the detriment of AI and lack of data privacy. We need a social communist response to this.
Even within this paradigm of the digital divide over who has access, there are varying degrees of what access and usage mean. In SA, not many have even a basic level of connectivity. One may be connected yet not understand the complexities of how our data can be exploited; another may not be able to remain connected at all. We know that corporations' concern is profitability. Could you tell us more about the options for alternatives, such as responding with community work? You cannot prioritise one over another.
Even if people are not connected, the government has moved many systems online, especially in delivering its services, which can lead to life-and-death situations: e.g. the hospital computer may not accurately recognise the patient's beneficiary status or actual needs. We need the right to a non-tech approach to many foundational services.
Secondly, we have abandoned the agenda of public access. In India, we do have a broadband fibre connection programme, but it is not like a water pipe programme: how do you make the connection meaningful? How do you make and deliver programmes that benefit many? Fifteen years ago, we were working on efforts to get connected. However, once mobile connections came along, we came to think of the mobile connection as a complete replacement for broadband internet, when these are two different infrastructures.
Because of the epidemic of misinformation, there is a rising debate about access, in which we can seize the opportunity to educate and talk about digital citizenship literacy and public access models. Many APC members are running such models, for example on digital empowerment. We need a concerted response to lobby for a policy that can scale up these experiments.
Data and Labour Rights
Some 6-8 years ago, IT for Change did a report that looked at how datafication in Indian ports such as Mumbai had turned port workers into Amazon-warehouse-style stressed bots: their speed of movement was tracked with GPS, and they had to complete tasks in a gamified but deeply punitive and unreasonable way. Amazon warehouse 101, really, but in a PPP port. To me this kind of example is a great way to show the horrors of data collection. However, I also see that the horror of this work model reflects equally perverse and inhumane work conditions in the informal economy.
This is a way to recognise the importance of Labour rights in countering this. Do you know of campaigns/organisation/initiatives dealing specifically with the labour conditions of datafication and gamified work?
In June 2025, the ILO committed to international standards on gig work, promising decent work in the platform economy. In India, because of unions like the Indian Federation of App-based Transport Workers (IFAT) and the All India Gig Workers Union (AIGWU), a range of legislatures have passed legislation, including one that makes algorithmic work-management systems auditable by labour commissioners. These conversations about transparency and accountability for the trackers that we use are happening.
Even in the EU and in the UK, the unions have taken Uber under scrutiny. That’s probably the way forward we can contest.
Certain apps and software also track how much time you spend on your devices, including keyboard and mouse activity. Sometimes software developers do not see themselves as affected by these issues. I wonder if we can create any solidarity through this?
Collective licensing of our data and workers data is something we need more investment in.
There is a popular movement asking "why can't workers work with robots?", and we can comment on that.
Bossware - there is a lot of civil society organising against this. The Business and Human Rights Resource Centre and the Investor Alliance for Human Rights (not India-specific though) have been doing a fair bit of work on this.
Q: Some SA countries have passed or are in the process of passing data protection laws. Do any of them address these challenges of data protection and data rights? If not, in what ways can data protection legislation address these problems?
If we look at the data protection frameworks emerging in the region, the problem is that they try to create a baseline by releasing personal data into a publicly commodified market. This is not just true of our legislation; even the gold standard of the EU framework does not recognise anonymised data. Once data has been alienated on the basis of free and informed consent, and then aggregated through anonymisation, it can effectively operate as a Human Rights Free Zone.
The question of harm does not stop at the individual harm. Through downstream profiling and the downstream capture of public value, the private sector has effectively taken property. Unless you have compensatory mechanisms, how can you reclaim the public value of the data? The legislation does not answer the question of benefit sharing. What complicates this now - especially going by the experience of the African region and the pressure on India and Bangladesh in negotiations with the US - is that digital trade becomes a bargain in which the US is given free rein over your market. That is one reason we cannot yet imagine data as a resource; we are only able to talk about personal data rights.
We are given the illusion of being able to opt out of giving our data for the public and personal good, but there is also the question of whether we genuinely have that option and choice. Within the concept of inclusion and access, as much as we want everyone to be included, does that include the option for people to meaningfully opt out?
I think if you look at European legislation, especially the Digital Services Act, there is an idea that users cannot lose the right to select services.
This is a fascinating way of thinking about GDPR. Do you have links to specific material on this for diving deeper into this perspective?
- https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4123311
- https://itforchange.net/index.php/treating-data-as-commons
Final Thoughts:
Suggested reading: the Feminist Internet report that APC created.
- From a feminist standpoint, FOE for all cannot stand without FOE for women and marginalised communities. What kind of free speech do we want on the internet? We do not want the liberal idea either, which could end up amplifying the speech of those already in power, when what we want is an equal footing.