Cyber Threats and Security in the Caribbean: 2014 Update


[Excerpt from a recent interview I did with ICT Pulse on the state of cybersecurity in the Caribbean]

ICT Pulse: Niel, give us a quick recap of what were the most prevalent incidents in Barbados and/or in the region in 2013?
Niel Harper: In 2013, Barbados was subjected to attacks from a number of different threat vectors. Several government agencies, financial institutions and private businesses were the focus of targeted website compromises. Some of the techniques used were distributed denial-of-service (DDoS), cross-site scripting (XSS), and SQL injection attacks. There was also a sophisticated ATM skimming campaign, perpetrated by Eastern Europeans, in which several commercial banks were targeted. I would like to emphasize that these are the known issues. I am pretty certain that the occurrence and complexity of the attacks were much higher, but as there is no legal requirement to report breaches, we will simply never know.
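To make the SQL injection threat mentioned above concrete, here is a minimal Python sketch (illustrative only, not drawn from any of the actual incidents) contrasting a vulnerable query built by string formatting with a parameterized query, which is the standard defense:

```python
import sqlite3

# Hypothetical in-memory database standing in for a real application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(username):
    # Vulnerable: attacker input is spliced directly into the SQL text,
    # so input like "' OR '1'='1" rewrites the query's logic.
    return conn.execute(
        "SELECT * FROM users WHERE username = '%s'" % username
    ).fetchall()

def find_user_safe(username):
    # Parameterized: the driver treats the input strictly as data,
    # never as SQL syntax.
    return conn.execute(
        "SELECT * FROM users WHERE username = ?", (username,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # injection succeeds: returns rows
print(find_user_safe(payload))    # injection fails: returns nothing
```

The fix costs nothing at runtime, which is why unparameterized queries in 2013-era web applications were (and remain) an unforced error.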

ICTP: Although we are still early in 2014, how is the threat landscape changing? Are there any particular areas of concerns that you have for Caribbean organisations this year?
NH: The Caribbean will be facing the same evolving threat landscape as the rest of the world. For one, as more companies and individuals in the region move their information to the cloud, we should expect to see more focused attacks on corporate and personal data stored on cloud services. Secondly, we will witness greater use of advanced persistent threat (APT) techniques in the distribution of traditional malware. There will be growth in the amount of Android and iOS malware, and the burgeoning use of mobile apps for enterprise applications, coupled with increased social media usage, will broaden the overall attack surface. Given that Windows XP is still widely deployed across enterprises and on personal computers, the platform will become a huge target for attackers as Microsoft ends support activities. And finally, spam is evolving to a point where it is being employed more and more to deliver malware payloads.

ICTP: At the CARICOM level, there appears to be a growing awareness of cybercrime and calls by leaders that something be done. In your opinion, have there been any improvements in the cyber security-associated resources or support structures in Barbados, and/or perhaps regionally? What might still be missing?
NH: The Government of Barbados has signed an MOU with the ITU to set up a Computer Incident Response Team (CIRT) within the framework of the ITU-IMPACT initiative on strengthening cybersecurity. I believe that this step is a signal of intent by government to improve cyber response capabilities in the country. However, my concern is that the accompanying cybersecurity legislation and the necessary capacity building for personnel are not being addressed in as robust a manner as they need to be. Jamaica has expanded the capabilities of the Communication Forensic and Cybercrime Unit (CFCU) of the Jamaica Constabulary Force, and has also taken steps to establish a national computer security incident response team (CSIRT). A National Cybersecurity Task Force was also established in 2012. However, what Jamaica still lacks are large-scale cybersecurity awareness programs to educate key at-risk groups. The Caribbean Telecommunications Union (CTU) has also been doing its part to combat cybercrime region-wide, but there are still a plethora of challenges in numerous countries in terms of adequate resources and funding for cybersecurity response. Moreover, there is little to no coordination among the cybersecurity entities in place across the CARICOM footprint. This prevents the region as a whole from jointly benefitting from crucial activities such as threat information sharing, critical infrastructure protection, active defense and incident preparedness.

ICTP: Are you observing any real evidence of a greater willingness among organisations to take cyber/network security more seriously? How is that awareness (or lack thereof) being manifested?
NH: I think there are generally two types of organizations across the CARICOM region: 1) organizations that, by the very nature of their business and the operational and regulatory requirements they are subject to, are compelled to take cybersecurity seriously and invest heavily in a strong control framework to effectively mitigate the risks they are confronted with; and 2) firms or institutions whose management simply does not recognise or understand the high risks they face from cyber attacks and online crime. So what you now have is a situation where there are a handful of companies with very strong cybersecurity capabilities (mostly financial institutions), and a large number with weak controls as it relates to cyber resilience. All in all, many Caribbean organizations are still facing serious financial constraints, and budgetary planning cycles regularly do not include large expenditures on things like IT security. Monies are spent on seemingly more important corporate interests, although this will likely change as cyber-risks increasingly pose threats to human, social and economic well-being and stability.

ICTP: Are there any key areas in which businesses should be investing their network security/IT dollars this year?
NH: Businesses need to invest their money in personnel with specialized knowledge and expertise in implementing technical solutions, enhancing operational practices and developing effective cybersecurity-related policies. Governments as well as corporations also need to invest in awareness-raising programs around cybersecurity. And more dollars also have to be spent on research, monitoring, reporting, and coordination of responses to cybersecurity incidents.

The full article and interview can be found at:

Should We Fear the Era of Ubiquitous Computing?


More and more, technology is becoming an integral part of our lives. In a not so distant future, there will be a major convergence of entire industries in the fields of media, consumer electronics, telecommunications, and information technology. But the approaching wave of the technological revolution will affect us more directly, in all aspects of our lives – it is becoming apparent that our future will be characterized by the appearance of computing devices everywhere and anywhere. This concept is known as ubiquitous computing. Ubiquitous computing encompasses a wide range of existing technological platforms and emerging research topics, including distributed systems, ad hoc sensor networks, mobile computing, location-based services, context-aware computing, wireless networks, machine-to-machine (M2M) communication, artificial intelligence, and human-computer interaction.

Case in point, the functionality in smart mobile devices is constantly expanding into previously unthinkable dimensions. Wi-Fi positioning systems (WPS) and GPS can deliver location services accurate to within 10 meters in an outdoor setting. Short-range radio interfaces (Bluetooth, ZigBee, Z-Wave, IrDA, etc.) are creating personal area networks (PANs) that better facilitate intrapersonal communication. Mobile phones can now be employed as personal base stations or “access points” that connect a universe of “smart devices”. As it relates to the unbanked or under-banked, technologies such as Near Field Communication (NFC) and Unstructured Supplementary Service Data (USSD) are allowing more individuals and entrepreneurs to participate in the ever-burgeoning mobile economy. From the perspective of e-health and remote patient monitoring, mobile watches (essentially wearable computers) are able to capture a user’s health data and, if necessary, transmit vital statistics back to a medical center via telemetry. In this regard, new qualities and functions are developing due to a proximity to the body that a normal mobile phone could not previously achieve.

Former IBM Chairman Lou Gerstner conceptualized a “post-PC era” where he foresaw, “…a billion people interacting with a million e-businesses through a trillion interconnected intelligent devices.” Smartphones with high-speed data connections, geo-location positioning, and voice recognition capabilities that contextually interact with their environment are the first indicators of this type of ubiquitous virtual network of technical devices and day-to-day objects. Such developments are only now being realized due to rapid advances in technology. For example, semiconductor technology has progressed to a point where complex functions have been miniaturized, drastically reducing form factors in terms of weight, size and energy consumption. The field of “Body Area Networks” has broken new ground whereby the human body can be employed as a transmission channel for low-voltage electromagnetic signals. Touch, gesture and other tactile interfaces can initiate individualized communications, and be deployed for user authentication, personalized device configuration, or billing of products and services.

While determining concrete applications for such technologies is a difficult task, the potential for objects to communicate with each other, use available Internet services, and access large online data stores is simply mind-blowing. The field of ubiquitous computing, and its array of technologies, is creating linkages between the mundane world and everyday objects, between products and services and capital assets, and between e-commerce platforms and supply chain management systems. They are effectively removing human beings as intermediaries between the real and the virtual world. As a result, new business models are emerging that are providing incremental benefits to manufacturers, suppliers, and customers. More importantly, we are seeing the creation of a plethora of new services, such as the persistent personalization or customization of products throughout their entire life cycle.

Despite the obvious social and economic value of ubiquitous computing, particular attention needs to be focused on the issues of security and privacy. The promise of ubiquitous computers is accompanied by a broadening of the traditional Internet problem of “online history” (i.e. the collection of online user activity into big data sets) to include an even more extensive “offline history”. As such, whereas the online surveillance of individuals has been restricted to Internet usage, there will now be no clear delineation between “online” and “offline” data collection in a world of pervasive smart objects. Without a doubt, this will make the resulting data much more valuable. But who will be deriving value from this data (or more so profiting)? Whereas previously a limited profile of an individual could be “built” through data analytics, a much more comprehensive view of this person and his/her daily activities can be obtained in the ubiquitous reality. The question is: Do we really want others to have this much insight into our lives?

In his lecture, “The Ethicist’s and the Lawyer’s New Clothes: The Law and Ethics of Smart Clothes,” Glenn Cohen asserts that the ubiquity of computers threatens to “disrupt the place of refuge.” He warned that even when we switch off our mobile phones, given the prevalence of smart devices, “we squeeze out the space for living a life.” He concludes, “Lots of people have things they want to do and try but wouldn’t if everything was archived.” Should we expect the government and the rule of law to protect us in the ubiquitous world? In the post-Snowden era, we would be foolish to harbor such false expectations. Taking into consideration that most online surveillance activities are undetectable, the odds of anyone securing a legal claim against corporations or governments are slim to none.

In an ideal world, holding businesses responsible for baking robust privacy controls into their products seems to be an optimal solution. But this means that we have to be able to trust the companies (a tall order in my estimation). Most recently, the technical community, in the form of the Internet Engineering Task Force (IETF), has renewed its commitment to building greater security into Internet protocols such as HTTPS, and through the use of Transport Layer Security (TLS) and Perfect Forward Secrecy (PFS). However, there are significant limitations in the use of technology-only fixes to enhance privacy and security on the Internet (and ubiquitous computing will be no exception). Operational practices, laws, and other similar factors also matter to a large extent. And at the end of the day, no degree of communication security helps you if you do not trust the party you are communicating with or the infrastructure and devices you are using. With all that has happened over the last 24 months in terms of pervasive online surveillance, should we be fearful of what the ubiquitous era holds for us? I wouldn’t necessarily say that I’m afraid, but neither am I brimming with unbridled confidence.
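For readers curious what forward secrecy means in practice: it comes from using ephemeral key exchange (e.g. ECDHE), so that a recorded session cannot be decrypted later even if the server’s long-term private key is eventually compromised. A minimal sketch using Python’s standard `ssl` module (Python 3.7+; an illustration of the idea, not an IETF-endorsed configuration):

```python
import ssl

# Start from the library's hardened defaults for a client-side context.
context = ssl.create_default_context()

# Refuse legacy protocol versions.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# For TLS 1.2, offer only cipher suites with ephemeral (ECDHE) key
# exchange, which is what provides forward secrecy. (TLS 1.3 suites
# always use ephemeral key exchange and are unaffected by this call.)
context.set_ciphers("ECDHE+AESGCM")

# Every TLS 1.2 suite this context will offer now uses ECDHE.
for cipher in context.get_ciphers():
    print(cipher["name"])
```

A context like this would then be passed to `ssl.SSLContext.wrap_socket` or an HTTP client that accepts a custom context.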

Mind you, I am not by any means a pessimist. There is no doubt that ubiquitous computing will provide vast opportunities for improvement in the realms of our political, commercial, and personal existence. However, the multitude of concerns around governance, standards, integration, interoperability, security, and privacy will necessitate an effective multi-stakeholder approach. The demand will be for unprecedented collaboration among the technical community, academia, business, and government. My fear is that the concerns of the end user will be largely ignored amidst the jostling for position by the other players.

The Age of the Unregulated Algorithm


There can be no doubt that the use of big data analytics holds great promise as it relates to delivering numerous social and economic benefits. From the perspective of science and research, the introduction of new techniques and methodologies based on big data analytics represents a potential quantum leap in how discoveries are realized across scientific fields of endeavor. Case in point, some will argue that scientific modeling is an outdated practice given the enormous amounts of data available to researchers.

Supercomputers can easily mix, mash and detect complex patterns and relationships that were previously impossible to conceptualize. The delivery of public services is another area where big data applications can yield massive benefits in terms of economic development. If the public sector could sufficiently exploit available datasets (and sadly enough it isn’t doing so presently), it could: 1) enhance transparency in the public sector; 2) deliver more efficient, innovative and customized public services; and 3) facilitate more expedient policy creation and decision-making processes.

Still, with these benefits and more to be obtained, a number of critical questions remain: What are the risks to foundational values arising from big data analytics? What are the potential impacts of big data analytics on fairness and coherence? Are the necessary levels of knowledge and competence available within society to adopt big data analytics? Are current policy frameworks suited to the use of big data analytics in an era in which data is open, re-used, and re-combined in order to bring significant benefits?

As the debate rages on about how we best take advantage of the gazillion bytes of data that exist, what is clear is that the industry has to reach a point of self-regulation, or it will continue to be regulated by those who don’t understand what they’re doing (and society will be disadvantaged significantly more than it will be able to accrue the benefits of big data).

Cue the personal data economy! This shift in direction is about addressing the core issue of privacy through promoting greater awareness around the use of personal data as a resource. Presently, our data is primarily a transaction tool characterized by user identification and consumer purchasing habits. This model empowers (and emboldens) corporations and governments. Even worse, the fears and anxieties around privacy obscure the greater opportunities for improving the lives of individuals.

This new paradigm — the personal data economy — will be driven by more educated end users. The individual will be more powerful because he/she understands data ownership, and how they can optimally share their data but with greater control over different aspects of their anonymity. The result will be that the major features shaping the commercial environment will be “value-creation”, “transparency” and “openness”.

Nokia Corporation: Planning the next bounceback

For a firm that has gone from pulp to televisions to handsets to telecoms equipment and maps, anything seems possible. I believe that the sale of the handset manufacturing division to Microsoft was good, smart business. With its ‘new’ venture, Nokia Solutions & Networks (NSN), the company is focusing on an industry that allows for steady, modest growth and is less volatile from the perspective of fickle customers and quickly changing market demands. Even though competing with the likes of Ericsson, ZTE, Huawei and Alcatel-Lucent is quite the tall order, I believe that Nokia will not be fighting out of its weight class. And with the deals that the company has secured in recent times, I predict that all will be well with the Finnish giant. Moreover, I also proffer that Nokia will possibly acquire one of its competitors in the not so distant future. Reinvention is the name of the game, and the Finnish “company that can” has done this quite often in its history.

Regulators See Value in Bitcoin and Other Digital Currencies

Alternative currencies are nothing new (see Liberty Reserve, BerkShares and Ithaca Hours), and are an excellent way to break the inflationary and economically debilitating effects of fiat money. All that is really needed is buy-in and acceptance from a community that’s large enough, and commitment from other networked systems to allow for trading and exchange (the necessity of a government oversight and control framework is debatable). However, given the amount of pressure and negative attention that Bitcoin has received from regulators, central banks and other naysayers around the world — and the pervasiveness of the fiat money system (in terms of the controlling interests) — this development concerns me tremendously. Something just isn’t right here!

How Somebody Forced the World’s Internet Traffic Through Belarus and Iceland

The security and resiliency of the Internet is an important topic, and a key area where groups like the IETF, IEEE and W3C are undertaking significant work to ensure that critical Internet infrastructure is protected from large-scale cyber attacks. That being said, the risks of compromise have not been mitigated to tolerable levels, and attacks like the one this article describes can be quite difficult to defend against. Truly disconcerting!

The Real Privacy Problem

As more and more corporations and governments collect and analyze ever-increasing amounts of data about our lives and our activities, it’s appealing to react by creating more privacy-related legislation, or arrangements that pay individuals for use of their personal data sets. Instead, this article by Evgeny Morozov (the author of The Net Delusion: The Dark Side of Internet Freedom) suggests that what is needed is a civic-minded response, because democracy itself is at risk.

Locked Up for Linking? US Journalist Faces Prosecution

I have watched with great interest the developments over the course of the last 3-6 months as it pertains to widespread surveillance of Internet users by government agencies. While the NSA surveillance program has been the most publicized, there are reasons to believe that China, India, Pakistan, Russia, Australia and others are conducting similar activities.

One of the things that concerns me most is the double talk coming from most of these countries about “promoting the values and importance of online privacy in the context of basic human rights”. A bad precedent has been set. Let’s just accept this as the reality of things. And unfortunately, this precedent is eating away at some of the basic precepts of Internet growth — trust, openness and user-focused development.

And as you can see from this article, the government actions over the last couple of months have opened a Pandora’s box in terms of the individual’s right to information, freedom of the press, personal privacy, etc. The implications for the future of the Internet are grave. Let’s just hope that the system is as resilient to political and ideological threats as it is to technological ones.

Google’s apparent U-turn on net neutrality raises definition issues (and questions about content filtering and consumer freedoms)

Given that Google has been one of the staunchest supporters of net neutrality, its recent filing with the FCC came as somewhat of a surprise. In response to a customer’s request that the company amend its terms and conditions for service, Google this week filed a document with the FCC stating that customers of its fibre to the home (FTTH) network were restricted in what type of customer premise equipment or end user applications they could utilize over the network. This move is in direct contradiction of Google’s previous stance that service providers should not be allowed to act as gatekeepers, in essence preventing consumers from enjoying the full range of innovation and choice available through the open Internet. What do you think of this development?

Are Security Professionals Over-Confident in “Defense-in-Depth”?

In late May, NSS Labs released the results of its research on “Correlation of Detection Failures”. In an array of tests that implemented various combinations of layered security technologies, a mere 3% of the unique combinations managed to detect all the exploits employed. The published report covered the protection effectiveness of next-generation firewalls, intrusion prevention systems, and endpoint protection.

The tests included 37 security products from 24 different vendors and 1,711 exploits. There were 16 IPS, 8 next-generation firewall, and 13 endpoint protection products in the test. Networking products included the Barracuda F900 networking security appliance, Check Point 12600, and the Palo Alto PA5020.

None of the 37 tested products managed to detect all the exploits on its own. Of the 606 unique two-product combinations evaluated, only 3 percent detected all the exploits.
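The arithmetic behind that finding is worth making explicit: a layered pair only achieves full coverage if the union of the two products’ detections spans the whole exploit set, and any exploit that evades every product evades every combination. A toy Python sketch (with made-up detection sets, not NSS Labs’ actual data) illustrates why stacking imperfect detectors can still leave gaps:

```python
from itertools import combinations

# Hypothetical detection results: which exploits (0-9) each product catches.
exploits = set(range(10))
products = {
    "ips_a": {0, 1, 2, 3, 4, 5, 6},
    "ngfw_b": {2, 3, 4, 5, 6, 7, 8},
    "epp_c": {0, 1, 5, 6, 7, 8},
}

# A layered pair "detects" an exploit if either product in the pair does.
full_coverage_pairs = [
    pair for pair in combinations(products, 2)
    if products[pair[0]] | products[pair[1]] == exploits
]

# Exploit 9 evades all three products, so no pairing reaches 100% coverage.
print(full_coverage_pairs)  # []
```

This is the crux of the NSS Labs result: layering helps with exploits that at least one layer knows about, but it does nothing for the residue that slips past every layer.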

The results of these tests raise serious questions about the “holy grail” of defense-in-depth that is so often touted by security professionals. The key question that comes to my mind is: How do enterprises deploy adequate and effective security controls that defend against exploits able to circumvent multi-layered defense strategies? Have a look at the report and let me know what you think.