Sharing an interview I did with KPMG in my role as Advisory CIO for a Bermuda-based financial services organization
The Internet is becoming critical infrastructure for Africa. Across the continent, Africans increasingly depend on the Internet to communicate, socialize, and, most importantly, to conduct their day-to-day jobs and activities. A major outage of the Internet infrastructure is a prevailing fear for network operators, governments and users alike. But has Africa secured its Internet infrastructure?
I just finished participating in a panel discussion titled ‘Internet Infrastructure Security in Africa’ at the African Internet Summit (AIS) in Gaborone, Botswana. We sought to identify the major security challenges facing the Internet infrastructure driving Africa’s digital economies. This panel is a precursor to my participation in developing guidelines that will serve African countries in their efforts to protect their Internet Infrastructure from present and future threats.
My speaking points were specifically about existing mechanisms to combat various threats, and the cooperation between key stakeholders to defend their organizations/countries from an ever-changing threat landscape. I also described what types of structures were needed at the national and regional level based on best practices from around the world.
ICT Pulse: Niel, it has been two years since our last Expert Insights Series. Give us a quick recap: what have been the most prevalent incidents in Barbados and/or the Caribbean region since 2014?
Niel Harper: Over the last 2 years, various government websites in Barbados have been compromised and defaced by hackers. These included the Barbados Government Information Service (BGIS), Barbados Stock Exchange (BSE), Barbados Revenue Authority (BRA), Royal Barbados Police Force, and the Barbados Supreme Court, to name a few. Private websites such as the Barbados Advocate were hacked as well. There are still no data protection laws in the country, so in the absence of mandatory breach notification requirements, the few reported incidents are only the tip of the iceberg.
The prevalence of ATM skimming attacks has also increased. However, because the marketplace is dominated mostly by Canadian banks, Sarbanes-Oxley regulatory requirements have led to stronger controls, and many of the skimming attacks have resulted in arrests.
In the wider Caribbean, there have been similar trends of government websites being compromised. A number of organizations in St. Vincent, Grenada, St. Kitts & Nevis and other countries have been subject to malicious online attacks. One of the major commonalities across the region is that organizations with limited resources and untrained personnel have been the targets of successful attacks. This is a key reason why capacity building is critical to improving the region’s overall cyber response capabilities.
ICTP: How has the threat landscape changed over the past two years? Are there any particular areas of concern that you have for Caribbean organizations?
NH: The smartphone footprint continues to grow and with it the attack surface of mobile devices. That being said, many device manufacturers are focusing their efforts on enhanced security as a product differentiator. Still, end user education is necessary as an additional layer of protection against malicious threats.
Given the increased hardening of operating systems and applications, attackers are focusing on areas lower down the ‘stack’ such as BIOS, firmware, and graphics chipsets. Controls such as boot security, trusted execution, and active memory protection are making these attacks more difficult, but I expect these types of threat vectors to increase.
Newer technologies such as IoT (Internet of Things), M2M (machine-to-machine) communication, Network Functions Virtualization (NFV), and Software Defined Networks (SDN) are growing in terms of their deployment base. But this also introduces significant challenges in terms of security: single points of failure, open source software, and complexity. The fact that commonly used items such as televisions, refrigerators, and even automobiles, are now accessible through the Internet has vastly changed the threat landscape, and should force manufacturers and end users alike to focus more on cybersecurity.
The explosion of cloud computing, the increasing popularity of crypto-currencies, and the emergence of mobile payments (e.g. Apple Pay, Google Wallet, etc.) are also areas for concern with regard to an expanding threat surface.
All of these areas are of particular concern for Caribbean organizations, especially those seeking to be on the cutting edge […]
The entire interview can be found on the ICT Pulse website at: http://bit.ly/1T9iMQv
The legal status of domain names is one of the most hotly debated topics with regards to evolving property rights and how they should be applied to technological and intellectual property ‘innovations’ in cyberspace. At present, there are two opposing factions on this topic: On one hand, there are those who maintain that domain names should be considered as contracts for services, which originate from the contractual agreement between the registrant and the registrar. On the other hand, we have the parties who contend that domain names are intangible property rights that reside with the domain name holder.
As the law has evolved, property has been defined as “an abstract right or legally constructed relationship among people with respect to things” or “a bundle of rights, powers, privileges and immunities that define one’s relationship to a resource.” These theories have served conventional property rights well, but courts have found it quite challenging to determine how the concepts apply to domain names.
In this theme report, I will discuss service contract rights and the ‘bundle of rights’ property theory, as well as examine case law in a number of jurisdictions, and present an argument for why domain names should be considered as ‘property rights’.
Domain Names as Contracts for Service
A number of courts have categorized domain names as contracts for service. This in itself is not incorrect, as domain names are transferred to an individual through a contractual agreement between them and the domain name registrar. The role of the registrar is to register the name on the registrant’s behalf so that it can be mapped, through the Domain Name System (DNS), to an IP address. The registrant maintains their right to the domain name as long as they pay the associated fee to the registrar and ensure that the domain name is not used in bad faith and does not infringe on the intellectual property of others.
An analogy has been made between domain names and telephone numbers, accompanied by an argument that both domain names and telephone numbers are allocated and ultimately managed by either a registrar or a telephone company, and as such should be recognized as a contract for use and services. Hence, a person who registers a domain name or is assigned a telephone number is simply the contractual holder of that resource and does not become its owner. Ownership remains with the registrar or phone company.
Dorer v. Arel was the first litmus test of the theory that domain names form contracts for service, and that owners have no property rights to them […]
The full article can be found on the Circle ID website at: https://goo.gl/VkJsRb
There is no doubt that BEREC’s performance to date has been generally satisfactory. It has so far fulfilled its functions in a commendable manner, most notably with regards to Article 7/7a procedures, in addition to its contributions to the dialogue on international roaming and net neutrality. It has federated the NRAs in a way that its predecessor failed to: it has compelled them to be more accountable to themselves and to consumers. It has enabled further harmonization and strengthened interactions between the Member States and the EU institutions. It can be said that BEREC’s uniqueness is based on two elements: On the one hand, it is a body uniting highly skilled professionals who perform their tasks independently from any public or private entity. On the other hand, BEREC comprises representatives of different Member States and allows for regular exchange and deliberation between them, cascading the results of these processes to the European level.
BEREC’s independence, while imperfect, has proven to be a laudable feature of the organization. Its legal foundation (the Framework Directive) provides measures to ensure separation of powers and prevent undue political or private sector capture. The mixed funding model in place serves to curtail any attempts by the providers of the body’s financing to obstruct the effectiveness of its activities in delivering trans-national or pan-European services. However, this is not to say that BEREC’s independence from the individual NRAs doesn’t require improvement, especially toward the goal of fashioning a shared European perspective that can override the national interests of the constituent NRAs.
The current organizational structure, from the technical to the decision-making level, provides balance between stability and flexibility. It also leaves room for negotiations to take place at different levels, considering all views in an efficient manner. The EWGs have improved their performance and work in a more professional manner. In recent years, the quality of the reports has improved while deadlines are met in practically all cases. However, rules or guidelines for EWG work may also be useful for the better functioning of BEREC.
BEREC’s lack of decision-making/enforcement powers can be a double-edged sword. On the one hand, it manifests as a weakness in cases where NRAs choose to reject opinions from BEREC, and pursue undertakings that run counter to the strengthening of the single market. On the other hand, it can serve as a balancing influence as it pertains to the regulatory powers of the Commission and the national regulators. Fortunately, BEREC has had a more balanced record whereby it has taken on several opinions that support the draft decisions of NRAs, and both the Commission and the national regulators have largely agreed with the opinions of BEREC in instances where there was divergence.
Clarity around its accountability continues to be a challenge for BEREC. The body was formed to provide expert opinions on relevant topics, define priorities and advise the EU institutions regarding the harmonization of the single market. It is of critical importance that BEREC demonstrate greater accountability for its own objectives. This can be achieved by documenting its commitments or tactical goals for each coming year, and then through reporting on its achievements to EU institutions at the close of the year.
Models of regulatory governance vary in the level of discretion granted to regulators. This determines the level of transparency required to reassure stakeholders and build legitimacy around regulatory decisions. European citizens and residents have very strong beliefs about the right to access information related to their political and legal institutions. Additionally, the Commission has been vigorously promoting open data and generating value through the re-use of a specific type of data – public sector information. Simply put, BEREC needs to demonstrate its commitment to openness and transparency to build greater trust and legitimacy among its stakeholders. There isn’t much more to it.
The ultimate success of the EU single market depends on the existence of a body that can effectively influence outcomes in national markets and begin to erode the pervasive ‘national’ market approach of Member States. The failure of the ERG is one of the main reasons why the European e-communications market remained a patchwork quilt of national markets for some time. BEREC has many of the elements needed to become a successful force in coordinating national approaches and bringing consistency through decentralized regulation. However, it could also become a major obstacle to the Commission’s harmonization policy by becoming a center for European regulation that protects and lobbies for national interests. The jury is still out on which way the pendulum will swing.
The full academic paper can be found here: http://bit.ly/3mzDGLU
The traditional path of multilateralism is usually thought of as very much based on interactions and agreements between nation states. This political form of organization is a closed system encompassing multiple governments, and there are strong barriers to enter or participate in the system. While it is premised on creating a binding effect (consensus), discouraging unilateralism, and giving a voice and voting authority to smaller powers, this is not always the case in multilateral arrangements. I will use the United Nations (UN) as a point of reference to validate my point.
In the UN, the objective is that irrespective of the differences in territorial size, population size, military power or economic strength, all states have the same legal personality, although it is universally acknowledged that this principle does not correspond to the reality. And while a ‘one state, one vote’ rule does exist within the UN General Assembly, the Security Council (the most powerful body within the UN) has five permanent members who all hold the power to veto resolutions brought by the other members. And while there is a revolving door in terms of non-permanent members, there are at least 60 members who have never held a seat on the Security Council. Inequality is very much evident in this arrangement.
However, although systems such as the UN remain multilateral from the perspective that only states are members of most of its formal bodies, civil society does participate in a consultative role. Furthermore, civil society organizations have performed important roles such as mobilizing support for UN policies, gathering information, offering advice and drafting treaties. In a number of conventions, NGOs have not only offered expert advice, but have also drafted treaty language. So, in effect, the system is not entirely closed.
That being said, this traditional path of multilateralism is still not well suited for maintaining an open, resilient, and secure Internet, mostly due to the fact that it is not informed by broad participation of various interested stakeholders — including businesses, technical communities, civil society, academia — through a consensus, bottom-up process of policymaking.
Still, to be fair to governments, there are references in the Geneva principles as well as the Tunis Agenda that recognize and affirm that a multilateral process should exist apart from the multistakeholder approach with regards to mapping out the future roadmap on Internet governance. A strong argument can also be made that the Internet governance ecosystem is not entirely sensitive to the cultures and national interests of nations, and that the current framework of Internet governance is not particularly effective in responding to some of the core and strategic concerns of nation states (cyber crime, cyber terrorism, child online protection, protection of critical infrastructure, taxation, etc.).
So what we need is continued evolution of Internet governance mechanisms to a point where there is successful interplay between multilateralism and multistakeholderism, and which substantially improves the degree to which multilateralism can in practice (and not just in theory) become more representative, democratic, transparent and accountable – and whereby its contributions would benefit the entire Internet ecosystem.
That being said, I think that we’re witnessing several improvements in terms of how multilateral and multistakeholder institutions are coexisting and cooperating to work on Internet governance issues without significant tensions, and without undermining the Internet and its vast potential.
Consider, for example, the WSIS+10 High Level Event, which was organized by predominantly multilateral agencies (ITU, UNESCO, UNCTAD, and UNDP) to review progress in implementing the outcomes of WSIS. The preparatory process and the outcome documents can be viewed as positive developments, and can be recognized as examples of how multilateral institutions are opening up to multistakeholder participation, especially given that member states have increasingly acknowledged the critical roles that other stakeholders have to play. See the WSIS+10 outcome documents here: <http://www.itu.int/net/wsis/documents/HLE.html>.
There was also incremental progress at the ITU’s Plenipotentiary Conference, which took place last year. At the conference, Member States agreed to establish mechanisms to enable multistakeholder input to the government-only Council Working Group (CWG) on International Internet Public Policy. While it would have been preferable to open the CWG entirely to multi-stakeholder participation, these advances are still commendable.
Another organization that has shown great promise in terms of the fusion of multilateralism and multistakeholderism is the OECD. The organization has a number of mechanisms in place to assist governments in developing policies to stimulate the digital economy. The Committee for Information, Computer and Communications Policy (ICCP) instituted a framework for participation of non-governmental actors in its work. The multi-stakeholder Internet Technical Advisory Committee contributes to the work of the OECD Committee on Digital Economy Policy (CDEP) and its specific working parties such as the Working Party on Communication Infrastructures and Services Policy (CISP) and the Working Party on Security and Privacy in the Digital Economy (WPSDE).
The recently concluded Internet Governance Forum (IGF) in João Pessoa, Brazil also had quite a large number of government delegates. See the Participant List here: <http://www.intgovforum.org/cms/igf2015-participantslist>. This demonstrates that more state actors are realizing the importance of the multistakeholder process, and seeking to embed themselves more deeply in the activities of the IGF. Interestingly enough, the Multistakeholder Advisory Group (MAG), the steering committee for the IGF, comprises a number of representatives from national governments.
While realizing the benefits of the Internet is not dependent on government, there is definitely a role for governments in the governance of the Internet, and this role is evolving, just as multistakeholderism continues to reshape and reform itself. Hopefully, the transition of the IANA function will be an optimal paradigm shift towards an Internet governance approach that fully embraces all stakeholder groups (and not just governments, but civil society and end users as well).
With recent news pertaining to the details of the proposed UK Investigatory Powers Bill, I am now more convinced than ever that governments are schizophrenic when it comes to online privacy. This new bill quickly follows the French government’s approval of ‘intelligence’ legislation which the United Nations Human Rights Committee deemed “excessively broad” in terms of surveillance powers.
In an effort to quell public outcry with regards to rampant, unregulated data collection by corporations, governments pass stronger data protection and privacy laws. Yet, at the same time they pass intelligence legislation giving themselves greater authority and the ways and means to collect more data about individuals. So they’re essentially granting themselves the same powers they created privacy and data protection legislation to prevent corporations from abusing. But let’s take it a step further and look at how they use these powers.
The normal process for wiretaps is as follows:
1. Obtain evidence of wrongdoing or intent to commit a crime
2. Provide judge with said evidence and seek authorization to monitor phones
3. Obtain explicit approval from a judge and commence wiretapping exercise
The new process for online surveillance:
1. Write laws that allow you to collect information on everyone just in case they do something wrong in the future
With protection of human rights as the underlying principle, intrusive surveillance to this degree is by no means proportionate. It constitutes a total overreach by law enforcement, invariably violating the right to private life and correspondence, and is unlikely to be ‘necessary in a democratic society’. But then we have these individuals who say, “If I am not doing anything wrong, why does it matter if the government collects data related to my landline calls, mobile calls, VOIP calls, emails, instant messages, SMS, social media posts, and photo uploads?” My advice to them is don’t be so quick to give up your rights.
Some of the sensitive facts those records uncover become glaringly obvious after some contemplation: Who has called a drug addiction counselor, a suicide hotline, a brothel, the HIV/AIDS information center, a divorce lawyer, their mistress or an abortion clinic? Which websites are people frequenting? What type of porn do they watch? What religious and political groups are they involved in?
Some facts are less straightforward to deduce. Because the metadata from your cellphone calls typically includes information about the proximity to cell towers, this data creates a virtual map of where you spend your time, who you spend it with, and what you’re doing.
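To make the point concrete, here is a small, self-contained Python sketch of this kind of inference. Everything in it is hypothetical (the tower IDs, coordinates, and call records are invented for illustration), but it shows how even a handful of metadata records, with no call content at all, yields a crude home/work profile:

```python
from collections import Counter
from datetime import datetime

# Hypothetical cell towers (illustrative coordinates only)
TOWERS = {
    "T1": (51.501, -0.142),   # suppose this one is near the subscriber's home
    "T2": (51.515, -0.089),   # and this one is near their office
}

# Hypothetical call metadata: (timestamp, serving tower, other party)
RECORDS = [
    ("2015-11-02 08:05", "T1", "party-a"),
    ("2015-11-02 09:30", "T2", "party-b"),
    ("2015-11-02 14:10", "T2", "party-b"),
    ("2015-11-02 21:45", "T1", "party-c"),
]

def likely_locations(records):
    """Count which tower serves the subscriber during working hours
    versus the rest of the day -- a minimal movement profile."""
    buckets = Counter()
    for ts, tower, _ in records:
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
        period = "day" if 9 <= hour < 18 else "night"
        buckets[(period, tower)] += 1
    return buckets

profile = likely_locations(RECORDS)
# The subscriber is served by T2 during working hours and T1 otherwise:
# the metadata alone sketches a home/work pattern.
```

Real carrier datasets add tower coordinates, signal strength, and dwell times, which is what turns simple counts like these into a genuine map of where someone spends their time and with whom.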
So many people believe strongly in democracy and regularly harp against authoritarian and despotic regimes. But in a democracy, it is essential that the vast majority of power reside with the masses. With the emergence of almost limitless data storage capabilities and powerful data analytics, information is quickly becoming the currency of power. As the ability of the government to collect and store vast amounts of data increases, so does its power. This systematic centralization and strengthening of power is chilling, and not so much for its impact on an individual basis, but more so for its wide-reaching effects on the organization of social and political activity.
The Internet has largely transformed the manner in which we build relationships, communicate, and innovate. It has also changed how we define and build wealth. For example, look at how Bitcoin is disrupting the existing financial system. Also take into consideration the fact that many successful tech companies have little to no physical assets or property — their market value is based on their technology platforms and the data held within them. These changes are necessitating a move away from traditional approaches to public policy and regulation, from human rights to intellectual property to national security.
As such, I do not believe that traditional policy and regulatory frameworks are able to address the Internet’s public policy concerns in a satisfactory way. Traditional frameworks were generally led by governments and focused on the underlying telecommunications infrastructure. WSIS made it clear that Internet governance, regulation, and policy are not restricted to the activities of governments and that many different types of stakeholders have a role in defining and carrying out Internet policy and regulation activities. Thus emerged new terminology such as ‘multistakeholderism’.
The activities related to Internet regulation and public policy are varied in nature, and include such areas as open standards development, the deployment and operation of critical infrastructure, sector regulation, legislation (data protection, intellectual property, cybersecurity, etc.), and several others. While governments play a role in some of these areas, there are a number of other stakeholders that address the various policy and regulatory concerns associated with the Internet.
What is unique about the Internet is that innovation ‘occurs at the edges’. Hence, the value is no longer in the network (as with traditional markets and associated policy and regulatory responses), but in devices, applications, and services. Unfortunately, policy and regulation have been slow in catching up to this change in market structure. So the key message here is that technology constantly changes, and policies and regulations that are premised on a set of technological “facts” are rendered ineffective when those facts change.
When I think about it, there are several reasons why legislation may be needed in response to technological changes:
1. Special regulations may be needed to prohibit, restrict, promote, or coordinate use of an emerging or new technology platform (e.g. IoT, RFID, DPI, etc.).
2. Existing laws may have to be clarified with regards to how they apply to activities, relationships, or processes that have been changed by technology (e.g. data privacy, data collection, online surveillance, etc.).
3. The scope of existing legal rules may be inappropriate in the context of new technologies.
4. Existing legal rules may become obsolete.
Oftentimes, new technologies have little to no negative or disruptive effect. In other instances, they may raise only a few of the aforementioned issues. Yet examples of each type of problem can be found in the context of diverse technologies. In some legislative corners, there are calls for technologically neutral drafting as a means of future-proofing the law. Still, this will not prevent some laws from being ineffective or operating unfairly in light of a constantly changing technology landscape.
I think that a better approach for dealing with ‘law lag’ is to focus on how the legal system holistically addresses technological change. We should examine the respective roles that administrative bodies, national courts, tribunals, law reform bodies, and other entities play in helping the law adapt to rapid technological change. A small example is the Queen’s Bench Division Technology and Construction Court in the UK, which deals principally with technology and construction disputes.
References to ‘law lag’ can often be used as a convenient excuse to avoid serious discourse around the regulation of science and technology. For example, those who scream, “The Internet cannot be regulated” are conveying a sense of anarchy and implying that the Internet evolves all on its own, and changes too quickly for policy or regulation to be applicable. This is a questionable assumption in my opinion. There are also cases where ‘technology lag’ can be observed. For example, the broad deployment of renewable energy technologies has been stymied by policies and regulations that protect entrenched fossil fuel-based systems. Another example is automotive design, where improvements have been driven by litigation and the advocacy of the legal community and consumer advocates as opposed to engineers.
This is not to say that deficiencies in the law can’t be corrected by an amendment to existing legislation. The concern is more about the timeliness and overall quality and effectiveness of amendments. For one, an amendment has to be fit for purpose, and not fix one issue while causing problems in other areas. One also has to consider legal flexibility and determine whether to incorporate new rules into common law as opposed to implementing more rigid statutory laws. Access to specialist skills and information is also key to ensuring that laws are not created by groups that don’t understand the complex issues at the intersection of technology, policy, and business (and that, as such, negatively impact one or more stakeholders with errant changes to legislation).
From a functional perspective, Bitcoin can be classified as money. It is a valid medium of exchange, as thousands of individuals and businesses exchange bitcoins for their goods and services. Even real estate transactions are being regularly conducted using bitcoins. As a unit of account, it also fares pretty well. Common goods are quoted in bitcoins at merchants, and bitcoins are traded against currencies such as the Dollar, Pound Sterling, Euro, and the Yen. Much like gold and silver, Bitcoin is finite, therefore making it an adequate store of value. It’s highly portable, easy to store, hard to steal, and not very easy to confiscate.
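Bitcoin's finiteness isn't an accident of scarcity like gold's; it follows from its issuance schedule. The block subsidy started at 50 BTC and is halved (with integer truncation, as in the consensus rules) every 210,000 blocks until it reaches zero, which caps total supply just under 21 million coins. A minimal sketch of that arithmetic:

```python
def total_supply_satoshis() -> int:
    """Sum every block subsidy Bitcoin will ever issue, in satoshis.

    The subsidy starts at 50 BTC (5,000,000,000 satoshis) and is
    halved, rounding down, every 210,000 blocks until it hits zero.
    """
    subsidy = 50 * 100_000_000   # initial reward: 50 BTC in satoshis
    total = 0
    while subsidy > 0:
        total += subsidy * 210_000
        subsidy //= 2            # the 'halving', with integer truncation
    return total

cap_btc = total_supply_satoshis() / 100_000_000
# cap_btc comes out just under 21,000,000 -- the well-known hard cap
```

Because each halving contributes half of the previous era's issuance, the series converges, and no amount of mining can push the supply past the cap.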
As for its standing as “money” for legal purposes, it may appear on the surface that Bitcoin is not quite there yet. Most notably, no country has thus far granted Bitcoin status as legal tender. However, US case law has set precedent via SEC v. Shavers and United States v. Robert M. Faiella and Charlie Shrem, establishing Bitcoin as money. Additionally, both the Bank of England (UK) and the Internal Revenue Service (USA) have reported that Bitcoin fulfills a number of the functions of money, and is therefore a valid method of meeting financial obligations or extinguishing debts. The UK, EU, USA, and Canada all treat Bitcoin as money or income for taxation purposes. And in the USA and Canada, anti-money laundering and terrorist financing regulations are applicable to Bitcoin exchanges. Hence, there is a solid argument that supports Bitcoin as “money” for legal purposes.
Then why are governments so apprehensive about granting Bitcoin legal status as money? There are a number of reasons to explain this situation. For one, there are widespread fears that cryptocurrencies represent a potential risk to the stability of fiat currencies. But fiat currencies are not devoid of their own problems, such as economic volatility, currency debasement, and price instability. Secondly, concerns have arisen with regards to the burgeoning underground marketplace where Bitcoin is a popular currency — but illegal activities exist and will continue to do so with or without Bitcoin. We also need to accept that legislation generally lags behind technology, and many lawmakers simply do not understand Bitcoin to begin with. How then can we expect them to adequately regulate it?
What is clear is that money is defined by society. If an extended community approves of something (by means of market forces and under the principles of demand and supply) as a medium of exchange, unit of account, and a store of value – it is money. And since 2009, increasingly larger markets have decided that Bitcoin solves a number of problems intrinsic to fiat currencies. For example, blockchain technology that underlies Bitcoin allows for participants in the financial system to share transactions on a common public ledger, consequently enhancing transparency and building greater trust while substantially driving down the costs of transaction and processing. As such, it has the potential to enable broad-based changes in banking processes. So instead of being led by fear, governments need to respond appropriately by embracing cryptocurrencies and focusing more attention on clarifying the legal and regulatory landscape.
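The "common public ledger" idea can be illustrated with a toy hash chain. This is a deliberately simplified sketch (real Bitcoin blocks carry Merkle trees, timestamps, and proof-of-work, none of which are modeled here), but it captures the property that matters for transparency and trust: every participant can recompute the chain, so tampering with history is immediately evident.

```python
import hashlib
import json

def block_hash(prev: str, txs: list) -> str:
    """Hash a block's contents together with its predecessor's hash."""
    payload = json.dumps({"prev": prev, "txs": txs}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, txs: list) -> list:
    """Link a new block of transactions onto the end of the chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "txs": txs, "hash": block_hash(prev, txs)})
    return chain

def verify(chain: list) -> bool:
    """Recompute every hash; editing an earlier transaction breaks
    all subsequent links, so any participant can audit the ledger."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["prev"], block["txs"]):
            return False
        prev = block["hash"]
    return True

ledger: list = []
append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
assert verify(ledger)

ledger[0]["txs"][0]["amount"] = 500   # attempt to rewrite history
assert not verify(ledger)             # tampering is immediately evident
```

It is this independent auditability, rather than any single trusted intermediary, that drives down transaction and processing costs in the way described above.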
The full academic paper can be found here: http://bit.ly/34r7mVc
The Internet is by all means a technological phenomenon. It is an open, accessible, and user-centric platform for human self-realization. It allows us to seamlessly and dependably connect with one another; it enables freedom of expression; it allows individuals to create, share, and collaborate. At the core of the Internet’s existence and continued evolution are its open, decentralized nature, resilience, and the ability to innovate at the edges. The Internet’s open technological standards are what underpin its rapid growth; and they are of critical importance to its continued vitality and utility. Open standards are what permit an employee connected to a corporate network in Brisbane to communicate with a villager accessing the Internet through a wireless community network in Sao Paulo.
The Internet Engineering Task Force (IETF) is the primary entity responsible for establishing the Internet’s open standards and best practices – standards for networking protocols, infrastructure, software, operations, maintenance, and security.
The IETF is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. It produces standards and best practices that influence the way people design, use, and manage specific aspects and segments of the Internet. Participants volunteer their time to develop and refine protocols that are useful to organizations, manufacturers, and vendors who utilize the Internet. The IETF is open to any individual who wants to participate. The actual technical work of the IETF is done in its working groups, which are organized by topic into several areas (e.g., routing, transport, security, etc.). Individuals become involved by subscribing to one of the IETF working group mailing lists and offering technically competent input on a standard being developed by that group.
The open, democratic, and merit-based nature of this structure allows thousands of people from around the world to contribute to the IETF’s work. As many as 1,400 individuals from more than 50 countries participate in each of the meetings of the IETF and its working groups. Many participants do not attend in person, but are involved through online collaboration tools or via the mailing lists. Anyone on a working group mailing list can propose a new standard or best practice. If the proposer can generate sufficient support from others, the working group may decide to take on development. A well-defined review process assures that the final document follows sound network engineering principles, meets security requirements, and is consistent with other Internet processes.
The IETF and each of its working groups make all decisions by consensus. Final accepted standards are based on the combined engineering judgment of participants and real world experience in deploying, operating, and administering IETF specifications. The great majority of work performed by the IETF and its working groups is done by email. Three international meetings are held each year, each lasting a full week. These provide opportunities for participants to meet one another face-to-face, to network, and to generate support for initiating new standards or best practices.