Thursday, October 3, 2019

Are we safe?


I think it is necessary to define clearly what we, the vast majority of people, understand by "feeling safe", and what we understand, or intend, by "being safe".
To begin with, feeling safe is not the same as being safe. The first is usually an attainable state, while the second is, depending on how deep we go, practically a utopia.



On the first: "to feel safe"

Most people experience an acceptable degree of security in their day-to-day lives. This holds both physically, the most conspicuous and worrying dimension when it is threatened, and in subtler aspects, ranging from the security of our interactions and communications in a broad sense to that of our information: information that identifies us, or that does not identify us directly but belongs to us because we generated it and that, in general, can ultimately be used, after more or less complex processing, to identify us.

The problem that arises with the advancement, deployment, and massive use of ICTs is that the vectors of exposure to potential security vulnerabilities are growing explosively and have become practically impossible for most of us to control.

So, while in the past technology was part of our lives, today our lives run through technology and, more unsettlingly, depend on it in many ways. That shift, which may look like a mere play on words, is far from it. It signals an irreversible change in the way society communicates, in the way we interact, and in the way we perceive safety and security.

There is an old maxim that says "security is the enemy of convenience": as the security level of any system increases, the comfort or simplicity of using that system effectively decreases.

Why do I say this belongs to the past? Because although it still holds some validity today, the question to ask ourselves is whether, with the "old" approach to security, we are really making systems safer, or whether we are simply complicating their usability even further without increasing their security effectively or significantly. On the other hand, it is possible to design systems whose use is simple while maintaining a reasonable and acceptable degree of safety and security.

We must then, I believe, develop new models for the safe use and security of systems that are intrinsically and inseparably linked to our lives; many of them are used daily by all of us without our stopping for even a minute to think, or realise, that they represent, to a greater or lesser extent, risks to our privacy or security. This inevitably leads us to the second point: "to be safe".

I also think that while it is true that privacy and security are two different things, in the vast majority of cases and for the vast majority of people, the lack of the former implies deficiencies in the latter.


On the second: "to be safe"

To be safe is often a difficult quest, and at this point I will even say a utopia. The good news, I think, is that while we cannot reach a pure state of safety or security, we can approach it as closely as we want. Of course, the closer we want to get, the more investment it will require in training, awareness, and the development of procedures and protocols, as well as in economic terms. I highlight the concept of investment, and the importance of the first two items, because that is where all this becomes most effective and sustainable in the medium and long term.

  • The first thing is to admit it ... we live in times when we must accept a high degree of coexistence with systems that are part of our lives and that make us potentially less safe than when we did not use them; yet we can no longer stop using them.
  • Design systems to be safe from scratch (what I call the NO-by-default principle: requiring no more than what is strictly necessary for the purpose of the system).
  • Recognise that we have historically identified vulnerabilities in software but not in hardware, and that detecting and identifying potential vulnerabilities in the latter is enormously more difficult than in the former.
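The NO-by-default principle in the second bullet can be sketched in a few lines. The permission names below are invented for illustration; the point is the pattern: grant only what is strictly necessary for the system's purpose, and deny everything else by default.

```python
# Hypothetical set of permissions strictly necessary for this system's purpose.
REQUIRED_FOR_PURPOSE = {"read_sensor", "send_report"}

def authorize(requested: set[str]) -> dict[str, bool]:
    """Grant only what is strictly necessary; everything else is denied (NO by default)."""
    return {perm: perm in REQUIRED_FOR_PURPOSE for perm in requested}

decisions = authorize({"read_sensor", "read_contacts", "send_report", "use_microphone"})
granted = {p for p, ok in decisions.items() if ok}
denied = {p for p, ok in decisions.items() if not ok}
```

Note that the default answer is "no": a permission absent from the required set is refused without any extra rule, which is exactly what keeps the system from collecting more than its purpose demands.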

So who is the villain in this movie, and what happens to those who are "extras"?

"In the past they were some and today they are others" ... the question is how, being a third party, to be reasonably safe? Being a third party, which also neither designs nor produces the equipment and/or the software that runs on said equipment, it is practically impossible to have all the necessary control over the entire production chain so that we can be completely sure of what happens with all the data and information these systems collect and handle. Then, one way of approaching the desired safeness state is to implement mechanisms that takes action after the mentioned production chain.

As a first approach, one could analyse the possibilities and resources for testing and homologating the various devices we use. The main problem is that the number of devices we use every day, not to mention the total number of devices sold, is, in practical terms and at user scale, effectively unbounded. That is to say, it would not be sensible even to consider approving them all, or to charge some entity with a quality control system that reviews each device sold in a given market.

What to do, then? One possible course of action comes from the self-classification of devices: classifying them on some internationally pre-accredited scale that serves as a guide for users when acquiring a given device. This is not a simple issue either. First, we would have to agree, at the level of some international organisation, on what that scale would be and what elements must be present for a device to fall within a certain classification level. Second, manufacturers would then have to respect this scale so that the self-classification is actually effective.
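To make the idea concrete, here is a minimal sketch of such a self-classification scale, in the spirit of energy-efficiency labels. The level names and criteria are invented for illustration; a real scale would have to be agreed by an international body, as argued above.

```python
# Hypothetical criteria a device must satisfy to claim each level.
CRITERIA_BY_LEVEL = {
    "A": {"open_firmware", "local_processing_only", "signed_updates", "no_default_password"},
    "B": {"signed_updates", "no_default_password"},
    "C": set(),  # no security guarantees claimed
}

def self_classify(device_features: set[str]) -> str:
    """Return the best level whose criteria are all met by the device."""
    for level in ("A", "B", "C"):
        if CRITERIA_BY_LEVEL[level] <= device_features:
            return level
    return "C"

# A device with signed updates and no default password, but cloud-dependent,
# cannot claim level "A" and so labels itself "B".
label = self_classify({"signed_updates", "no_default_password", "cloud_dependent"})
```

The label only guides the buyer; as the text notes, the scheme works only if manufacturers actually respect the declared criteria.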

Another important step comes from education and awareness. This seems, undoubtedly, to be a path that is both effective and necessary, so that each user has at least the basics to assess a particular device and decide whether or not it meets the level of security sought. For this, education and awareness must be pursued at all levels, both in formal education (at every stage) and through public campaigns.

As a higher-level step, equipment with at least basic, open hardware and software capabilities could be made available at the domestic, business, or industrial level; otherwise we are merely transferring the problem to the end user. Such equipment would be responsible for informing us of the origin and destination of any inbound or outbound data flow, and it should give us the means to configure and train the system ourselves, through simple and clear rules, so as to distinguish "suspicious" flows from "normal" ones, or at least to warn us when something deviates from the expected or specified behaviour. Although this may sound complicated, it is relatively simple: some of these firewall-like systems are already commercially available and accessible to many users, both at home and in corporate or organisational settings.
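The simple, clear rules described above can be sketched as follows. The rule format and the hostnames are assumptions for illustration; real firewall-like products use much richer rule languages, but the logic is the same: anything the user has not declared normal triggers a warning.

```python
# Flows the user has declared "normal", as (direction, remote host, remote port).
NORMAL_RULES = [
    ("outbound", "time.example.net", 123),
    ("outbound", "updates.example.com", 443),
]

def classify_flow(direction: str, host: str, port: int) -> str:
    """Label a flow; anything outside the specified behaviour is flagged."""
    if (direction, host, port) in NORMAL_RULES:
        return "normal"
    return "suspicious"  # warn the user instead of silently allowing it

# Observed traffic: the undeclared telemetry flow is the one that raises an alert.
alerts = [
    flow
    for flow in [
        ("outbound", "updates.example.com", 443),
        ("outbound", "telemetry.example.org", 8080),
    ]
    if classify_flow(*flow) == "suspicious"
]
```

The design choice here mirrors the NO-by-default principle from earlier: the default verdict is "suspicious", and only explicitly configured flows pass quietly.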

The area that I believe still has much room for improvement covers the graphical interfaces, the configuration mechanisms, and capacity building and awareness, so that safety and security mechanisms and devices become truly functional and accessible to us all. And of course, regulation and public policy, where needed, must encompass all these procedures.