Citizen Accountability in a time of Facebook

This article was written by John Gaventa and first published on the IDS opinion blog.

“Develop the social infrastructure to give people the power to build a global community that works for all of us.”

This is what Facebook’s Chief Executive Mark Zuckerberg said in his open letter to the Facebook community at the beginning of this year. The statement of intent from the social media giant is a bold one, and one worth reflecting on for those of us working on issues of accountability and empowerment. For me it raises a couple of important questions. How far can or should the likes of Facebook, and other technical innovations that have rapidly evolved over the last ten to fifteen years, connect us all as individuals and engage us with the institutions that govern us and help us hold them to account? And how does this happen in a world where the opportunities and spaces to voice dissent and protest are shrinking, and where questions about ‘whose voice matters’ are further confused and complicated by ‘whose voice is real or authentic’ in this digital age?

The promise of tech

They were also questions that arose at the recent Making All Voices Count (MAVC is a programme funded by DFID, Omidyar, SIDA and USAID) Policy and Practice Dialogue, Appropriating Technology for Accountability. And as I reflected in my speech at that event, these questions around transforming and improving accountability are by no means new. However, the context in which we ask them is constantly changing – from the Gutenberg press, which took printing out of the hands of priests and put it into the hands of the people over five hundred years ago, to more recently, the advent of the personal computer, the internet (1990), Facebook (2004), Twitter (2006) and WhatsApp. These technologies have revolutionised the way people access information, how they communicate with each other as well as with institutions and public figures, and how they respond to and organise around particular issues.

There’s no denying the positive force of these technologies in helping people to speak out and to amplify voices in an attempt to hold powerful institutions and individuals to account. This was evident in a number of examples shared over the course of the two-day event: the Black Sash human rights organisation in South Africa, which is piloting a project encouraging citizen-led monitoring of local public services; This Is My Backyard (TIMBY), which has highlighted millions of dollars of misspent county social development funds and unearthed a 10.5 million dollar scandal in Liberia; Game My Village, which built new relationships of trust and transparency between government officials and villagers in Indonesia; and Oil Journey, which communicated with over 300,000 citizens in Accra, Ghana, about how oil revenues were being spent on community development projects.

Tech and closing civil society space

Yet at the same time there is no escaping the fact that these technical innovations designed to empower are operating in a global environment where civil society space is shrinking. The current situation has been labelled by Civicus as ‘a Global Civic Space Emergency’ in their 2017 State of Civil Society Report. The report highlights that:

  • Only three per cent of the global population live in countries where civic space is completely open.
  • In 106 countries (more than half of all countries), civic space is seriously constrained.
  • This problem affects all regions of the world, including the UK, where civic space has narrowed in the past year.

Indeed, the evidence suggests that technologies are being used to close spaces as much as to open them, to surveil and monitor as much as to connect and engage. Examples extend from malware being used to monitor the activities of advocacy and campaigning groups (highlighted in this open letter from Mexican civil society to the Open Government Partnership (pdf) and this IDS Bulletin article, The Dark Side of Digital Politics) to state-supported trolling. For those gathered at the conference, there was a sense that the excitement and optimism that had characterised the work of MAVC and other similar programmes exploring accountability and the role of technologies in creating more open, inclusive and accountable societies only a few years ago was being replaced by a growing pessimism.

A digital level playing field?

The conundrums and paradoxes associated with technology and its role in promoting accountability are also evident in relation to global governance. On the one hand, technology has enabled voice and responsive governance; on the other, the governance of the digital sphere remains in the hands of a powerful few who control the networks they have created. As reported recently in the New York Times, Google’s market share of search advertising is 88% and Facebook owns 77% of mobile social media traffic.

Digital technologies have created winners and losers, rather than a level playing field. Rather than disrupt, they have often replicated entrenched inequalities and power imbalances within society. Critically, just under half of the world’s population remains offline. Moreover, women are 50 per cent less likely than men of a similar age, education level and economic status to have access to the internet, and a third less likely to access the internet via their phone (World Wide Web Foundation, 2016). Inequalities also exist within the tech industry. A study in the US found that Hispanics, African Americans, and women hold only 8 per cent, 7.4 per cent and 36 per cent of tech sector jobs respectively (US EEOC 2016). Hence, across the decision making, usage and application of technologies, it is often the voices of the already powerful that are amplified, while the voices that have always been marginalised remain unheard.

Within this unequal context, it has also become increasingly hard to distinguish, amongst the myriad information flows and voices, between what is authentic and what is not. Most technology users have little understanding of how complex and sophisticated algorithms are used by companies, governments and individuals to control and manipulate what is shared and liked, and ultimately to shape public opinion and debate.

While technology has helped achieve amazing things, in itself it cannot create a ‘social infrastructure…that builds a global community that works for us all.’ Politics and power still matter, and it is only when we link an understanding of these with technology-led accountability initiatives, as well as with more analogue, traditional efforts, that transformative change towards a more inclusive, accountable and open world is possible.

 

Why isn’t Tech for Accountability working in Africa?

Research is shedding light on the problems inherent with adopting technology for accountability initiatives, and providing recommendations for future projects.

In an article published by the South African Institute of International Affairs (SAIIA), Indra de Lanerolle argues that “it seems that civil society organisations (CSOs) and governments often ‘re-invent the flat tyre’: experimenting with new tools without finding out what has been tried (often unsuccessfully) before. They also do not follow best practices in how to source, develop and test technologies to ensure that these are ‘fit for purpose’. Decision makers should focus on building an effective innovation ecosystem with better links between technologists and accountability actors in both government and civil society to enable learning from success – and mistakes”.

Recommendations include:

  1. Those with responsibilities in creating the innovation ecosystem, including funders, should focus on building a supportive innovation ecosystem.
  2. Funders should shift their focus from supporting short-term pilots to building institutions capable of success over time, and invest in strengthening links between initiatives and disseminating learning resources across the continent.
  3. Those who are leading and managing innovation initiatives – in government and CSOs – should focus on getting better and smarter at managing the innovation cycle.
  4. Research suggests the following ‘rules of thumb’ will lead to better outcomes: acknowledge what you do not know, think twice before building a new tool, get a second opinion, test technologies in the field, plan for failure, budget to iterate, and share what you learn.

To find out more and read the full article: Why isn’t tech for accountability working in Africa?

 

Technology can boost active citizenship – if it’s chosen well

Indra de Lanerolle, University of the Witwatersrand

Civic technology initiatives are on the rise. They are using new information and communication technologies to improve transparency, accountability and governance – faster and more cheaply than before.

In Taiwan, for instance, tech activists have built online databases to track political contributions and create channels for public participation in parliamentary debates. In South Africa, anti-corruption organisation Corruption Watch has used online and mobile platforms to gather public votes for Public Protector candidates.

But research I recently completed with partners in Africa and Europe suggests that few of these organisations may be choosing the right technological tools to make their initiatives work.

We interviewed people in Kenya and South Africa who are responsible for choosing technologies when implementing transparency and accountability initiatives. In many cases, they’re not choosing their tech well. They often only recognised in retrospect how important their technology choices were. Most would have chosen differently if they were put in the same position again.

Our findings challenge a common mantra which holds that technological failures are usually caused by people or strategies rather than technologies. It’s certainly true that human agency matters. However powerful technologies may seem, choices are made by people – not the machines they invent. But our research supports the idea that technology isn’t neutral. It suggests that sometimes the problem really is the tech.

Code is law

This isn’t a new discovery. As the technology historian Melvin Kranzberg put it:

Technology is neither good nor bad; nor is it neutral.

US legal professor Lawrence Lessig made a similar case when he argued that “Code is Law”.

Lessig pointed out that software, along with laws, social norms and markets, can regulate individual and social behaviour. Laws can make it compulsory to use a seat belt. But car design can make it difficult or impossible to start a car without a seat belt on.

Our study examined initiatives with a wide array of purposes. Some focused on mobile or online corruption reporting, others on public service monitoring, open government data publishing, complaints systems or public data mapping and budget tracking.

They also used a range of different technological tools. These included “off-the-shelf” software; open-source software developed within the civic tech community; bespoke software created specifically for the initiatives; and popular social media platforms.

Fewer than one-quarter of the organisations were happy with the tools they’d chosen. They often encountered technical issues that made the tool hard to use. Half the organisations we surveyed discovered that their intended users did not use the tools to the extent that they had hoped. This trend was often linked to the tools’ specific attributes.

For instance: if an initiative uses WhatsApp as a channel for citizens to report corruption, the messages will be strongly “end-to-end” encrypted. This security limits the behaviour of governments or other actors if they seek to read those messages. If Facebook Messenger is used instead, content will not be encrypted in the same way. Such decisions could affect the risks users face and influence their willingness to use a particular tool.

Other applications, like YouTube and Vimeo, may differ in their consumption of data. One may be more expensive than the other for users. Organisations will need to consider this when choosing their primary platform.

It’s not always easy to choose between the many available technologies. Differences are not transparent. The effects of those differences and their relevance to an initiative’s aims may be uncertain. Many of the people we spoke to had very limited technical knowledge, experience or skills. This limited their ability to understand the differences between options.

One of the most common frustrations interviewees reported was that the intended users didn’t use the tool they had developed. This uptake failure is not only common in the civic tech fields we examined. It has been noted since at least the 1990s in the worlds of business and development.

Large corporations’ IT departments introduced “change management” techniques in answer to this problem. They changed employees’ work practices to adapt to the introduction of new technologies. In civic tech, the users are rarely employees who can be instructed or even trained. Tech choices need to be adapted for the intended users, not for a structured organisation.

Try before you buy

So what should those working in civic technology do about improving tool selection? From our research, we developed six “rules” for better tool choices. These are:

  • first work out what you don’t know;
  • think twice before building a new tool;
  • get a second opinion;
  • try it before you buy it;
  • plan for failure; and
  • share what you learn.

Possibly the most important of these recommendations is to try or “trial” technologies before making a final selection. This might seem obvious. But it was rarely done in our sample.

Testing in the field is a chance to explore how a specific technology and a specific group of people interact. It often brings issues to the surface that are initially far from obvious. It exposes explicit or implicit assumptions about a technology and its intended users.

Failure can be OK. Silicon Valley’s leading tech organisations fail regularly. But if transparency and accountability initiatives are going to improve their use of technology, they are going to need to learn from this and from other research – and from their own experiences.

The Conversation

Indra de Lanerolle, Visiting Researcher, Network Society Lab, Journalism and Media Programme, University of the Witwatersrand

This article was originally published on The Conversation. Read the original article.