Leading Engineers: Sponsorship

I have seen significant positive outcomes for both organizational growth and team culture from sponsoring engineers. Sponsorship involves championing engineers by supporting their contributions and enabling them to work across boundaries within their teams and the wider organization.

Benefits of Sponsoring

Sponsoring is an investment in the future of the organization, building a trusted circle of capable engineers who are equipped to meet new challenges. This investment translates into:

  • Mentorship and Knowledge Sharing: When managers allocate support for sponsorship, engineers get the time to mentor junior colleagues and to share best practices and insights that improve the team’s collective expertise.
  • Talent Attraction and Retention: Demonstrating a commitment to engineers’ development and career progression enhances the organization’s ability to attract and retain top talent.
  • Employee Engagement and Job Satisfaction: Providing opportunities for strategic contributions fosters a sense of purpose among engineers, which translates into higher levels of employee engagement, job satisfaction, and commitment to the organization.

Strategies for Effective Sponsoring

There are a number of ways to implement sponsorship. One approach is a formal program that establishes the sponsorship relationship through a structured process with regular meetings and goals; this guarantees engineers a baseline of support for pursuing bigger initiatives. Another approach is to foster an informal relationship in which a more experienced individual (a senior engineer, manager, principal engineer, director, etc.) provides support and advice to help the engineer navigate the complexities of the company and of cross-company projects.

Managers can further support engineers by:

  • Creating a Supportive Environment: Generate oxygen for engineers to share their knowledge or mentor other engineers.
  • Offering Leadership Opportunities: Empower engineers with leadership roles, project management responsibilities, prototype exploration, and infrastructure work.
  • Encouraging Continuous Learning: Offer engineers access to training programs, conferences, workshops and other learning opportunities to stay current with industry trends.
  • Providing Resources and Recognition: Ensure engineers have the necessary resources and tools. Recognize and celebrate their achievements, contributions, and commitment to the organization.

Sponsorship vs Mentorship

Sponsorship goes beyond mentorship by actively advocating for and supporting an individual’s career. Sponsors leverage their influence, resources, and connections to create opportunities, open doors, and ensure that the engineers are seen, heard, and recognized. Sponsorship is similar to having a champion who is invested in your success and willing to go the extra mile to help you reach your full potential.

The key difference lies in the level of involvement and the ultimate goal: a mentor advises and develops, while a sponsor acts and advocates on the engineer’s behalf.

Being an Effective Sponsor

If you are in a position to sponsor someone, there are several things you can do to be an effective sponsor:

  • Identify High-Potential Individuals: Look for individuals who demonstrate talent, dedication and a strong work ethic.
  • Cultivate a Meaningful Relationship: Get to know each other on a personal level to understand their career goals and aspirations better.
  • Actively Advocate: Speak up in meetings, recommend them for promotions and opportunities, and actively provide feedback.
  • Provide Resources: Connect them with relevant people, resources, and training programs.
  • Celebrate their Successes: Acknowledge and celebrate achievements and milestones.

Measuring the success of sponsorship can be challenging due to its intangible nature. However, progress can be gauged by setting specific milestones for career development and project leadership, holding regular feedback sessions, looking for increased visibility in larger organizational contexts, and observing growth in capabilities, confidence, and broader influence.

Conclusion

Mentorship and sponsorship work together to create a comprehensive framework for career development, with mentorship providing support and sponsorship propelling careers forward.

Sponsorship can have a transformative impact on engineers’ lives and careers, helping them achieve their full potential and contribute the most to their organizations. This dual approach benefits not only engineers but also organizations: sponsorship cultivates a culture of excellence, positive competition, loyalty, and innovation, and propels the organization forward.

Resilient Security and Supply Chain Attacks

Working in software engineering, I have grown increasingly aware of a pervasive issue in our modern programming landscape: the risk of supply chain attacks. To explore the topic, I have decided to build an experimental methodology and a prototype implementation. Constrained Supply Chain Vetting (CSCV) is a method for identifying and mitigating supply chain threats, focusing on accommodating the needs of different business units. At the center of this method lies its ability to prioritize security metrics per the specific requirements of each organizational division. The implementation of the “Pipe Lock” system exemplifies this method.

I envisioned Pipe Lock as a multifaceted security framework, harnessing static, metadata, and dynamic analysis techniques to safeguard software ecosystems. The primary innovation is the CSCV methodology. Imagine running an organization with diverse units, each with unique priorities and security needs. CSCV works like an adaptable security shield that accommodates these individual requirements, adding a tailored touch to an organization’s security framework.

While conventional security methodologies like Software Composition Analysis (SCA), Static Application Security Testing (SAST), and Dynamic Application Security Testing (DAST) provide valuable insights, CSCV goes one step further. It identifies open-source vulnerabilities and customizes its configuration based on business intelligence. Additionally, Pipe Lock extends its reach to both proprietary and open-source code, creating a wider security blanket.
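To make that per-unit customization concrete, here is a minimal sketch in Go of how such priorities might be encoded. The type, field names, and weights are hypothetical illustrations of the idea, not Pipe Lock’s actual schema (the real configuration is expressed in CUE, as discussed below):

```go
// A hypothetical encoding of per-unit vetting policies. The names and
// numbers are illustrative only; they are not Pipe Lock's real schema.
package main

import "fmt"

// UnitPolicy captures how much weight a business unit assigns to each
// analysis signal, plus the risk score above which a package is blocked.
type UnitPolicy struct {
	Name          string
	StaticWeight  float64 // known-vulnerability findings
	DynamicWeight float64 // runtime-behaviour findings
	MetaWeight    float64 // provenance and author signals
	BlockAbove    float64 // reject packages scoring above this threshold
}

func main() {
	policies := []UnitPolicy{
		// A payments unit might weigh runtime behaviour heavily and
		// block early; an internal-tools unit might tolerate more risk.
		{Name: "payments", StaticWeight: 0.4, DynamicWeight: 0.4, MetaWeight: 0.2, BlockAbove: 0.3},
		{Name: "internal-tools", StaticWeight: 0.5, DynamicWeight: 0.2, MetaWeight: 0.3, BlockAbove: 0.7},
	}
	for _, p := range policies {
		fmt.Printf("%s blocks packages scoring above %.1f\n", p.Name, p.BlockAbove)
	}
}
```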

So, how does Pipe Lock work? The static analysis component is like a watchdog, sniffing out known threats in packages, such as cross-site scripting and SQL injection. Then, there’s the dynamic analysis component, which examines packages’ behavior during execution, like a detective observing a suspect. The metadata analysis component provides the context, shedding light on the package’s origin and authors. Last but not least, we have third-party feedback, which essentially crowdsources intelligence, reinforcing the power of our detection mechanisms.
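As a rough illustration of how these four signals could be blended, here is a small Go sketch; again, the names and the scoring formula are my hypothetical simplification, not the actual Pipe Lock code:

```go
// A hypothetical blending of the four signal sources into one risk
// score. The formula and names are my illustration, not the real code.
package main

import "fmt"

// Findings holds normalized scores in [0, 1] from each analysis
// component; 1 means the component flagged the package as highly suspicious.
type Findings struct {
	Static     float64 // known patterns such as XSS or SQL injection
	Dynamic    float64 // suspicious behaviour observed during execution
	Metadata   float64 // provenance signals: origin, authors, history
	ThirdParty float64 // crowdsourced intelligence from external tools
}

// RiskScore blends the first three signals with the business unit's
// weights (see the UnitPolicy sketch above) and adds a small fixed
// contribution from third-party feedback.
func RiskScore(f Findings, staticW, dynamicW, metaW float64) float64 {
	base := staticW*f.Static + dynamicW*f.Dynamic + metaW*f.Metadata
	return base + 0.1*f.ThirdParty
}

func main() {
	f := Findings{Static: 0.2, Dynamic: 0.8, Metadata: 0.5, ThirdParty: 0.6}
	fmt.Printf("risk: %.2f\n", RiskScore(f, 0.4, 0.4, 0.2))
}
```

The design point is that the weights come from the business unit’s policy rather than being hard-coded, which is what lets CSCV rank the same package differently for different divisions.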

Still, I am aware that no solution is perfect. The Pipe Lock implementation faces the daunting task of detecting ever-evolving supply chain attacks. It also offers limited support for package managers other than RubyGems and for additional programming languages, but I see this as an opportunity for future expansion. There are also ethical implications, such as ensuring the protective measures do not create an intrusive monitoring culture.

The Pipe Lock system is my attempt to provide an additional layer of security in an era where software supply chain attacks are on the rise. As with any experimental project, challenges are part of the journey, but I am eager to learn from them and continuously improve this system. Ultimately, the goal is to create a resilient software ecosystem where creativity thrives without fear of supply chain attacks. Combining static, metadata, and dynamic analysis with other input sources, such as third-party tools, the CSCV methodology identifies and ranks software supply chain risks while accommodating diverse business needs.

Building on related work such as OWASP Dependency-Check and Grafeas, Pipe Lock introduces novel features for a customizable solution. The use of CUE, an open-source constraint language, and an API-driven architecture enables integration with many IT systems and real-time adjustments. A possible extension is integrating machine learning techniques, large language models, and natural language processing. Recent lab results and research support the potential of large language models (LLMs) in the context of code security [Large Language Models for Code: Security Hardening and Adversarial Testing]. For instance, that research presents an approach that allows controlled code generation based on a given boolean property, steering program generation toward secure or vulnerable code. By leveraging similar approaches, the Pipe Lock system could analyze code comments and documentation using large language models, revealing hidden discrepancies between the code and its description.

A Deep Dive into Linux with man7 Expert Training

Over the past few months, I had the privilege to immerse myself in Linux expert training. Guided by the expertise of Michael Kerrisk, the author of The Linux Programming Interface, I coded and philosophized my way through the first principles of the Linux world.

The first training covered operating system architecture and the low-level interfaces needed to build Linux-based systems. The five-day intensive training was an excellent opportunity to dive deep into the power of epoll, signals, Linux APIs, and multiple practical use cases, such as implementing non-blocking servers with an efficient thread count.
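To give a flavour of that material, here is a minimal echo server I sketched afterwards in Go, driving the raw Linux APIs through golang.org/x/sys/unix. It is my own simplification of the pattern taught in the course, not course material, and error handling is reduced to panics:

```go
// A single-threaded echo server multiplexing connections with epoll,
// via the raw Linux APIs in golang.org/x/sys/unix. Linux only.
package main

import "golang.org/x/sys/unix"

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Non-blocking listening socket on port 8080.
	lfd, err := unix.Socket(unix.AF_INET, unix.SOCK_STREAM|unix.SOCK_NONBLOCK, 0)
	must(err)
	must(unix.SetsockoptInt(lfd, unix.SOL_SOCKET, unix.SO_REUSEADDR, 1))
	must(unix.Bind(lfd, &unix.SockaddrInet4{Port: 8080}))
	must(unix.Listen(lfd, unix.SOMAXCONN))

	// One epoll instance watches the listener and every client socket.
	epfd, err := unix.EpollCreate1(0)
	must(err)
	must(unix.EpollCtl(epfd, unix.EPOLL_CTL_ADD, lfd,
		&unix.EpollEvent{Events: unix.EPOLLIN, Fd: int32(lfd)}))

	events := make([]unix.EpollEvent, 64)
	buf := make([]byte, 4096)
	for {
		n, err := unix.EpollWait(epfd, events, -1) // block until activity
		if err == unix.EINTR {
			continue // interrupted by a signal: retry
		}
		must(err)
		for i := 0; i < n; i++ {
			fd := int(events[i].Fd)
			if fd == lfd {
				// New connection: make it non-blocking and register it.
				cfd, _, err := unix.Accept4(lfd, unix.SOCK_NONBLOCK)
				must(err)
				must(unix.EpollCtl(epfd, unix.EPOLL_CTL_ADD, cfd,
					&unix.EpollEvent{Events: unix.EPOLLIN, Fd: int32(cfd)}))
				continue
			}
			// Client data is ready: echo it back, close on EOF or error.
			r, err := unix.Read(fd, buf)
			if r <= 0 || err != nil {
				unix.Close(fd) // closing removes the fd from the epoll set
				continue
			}
			unix.Write(fd, buf[:r])
		}
	}
}
```

The single event loop is the point: epoll lets one thread multiplex many connections, which is the foundation for non-blocking servers with an efficient thread count.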

The second training was a journey into the depths of low-level Linux privileged applications and containers, virtualization, and sandboxing. By the end of this training, I could review Docker’s and Podman’s architecture decisions with detailed arguments (that daemon!). This intensive, four-day course was a real eye-opener. Before it, for example, I thought I had a low-level understanding of Linux capabilities or the UTS namespace; however, Michael’s training offered many new insights into the principles behind these features. One of the course’s highlights was building containers from scratch after understanding the workings of namespaces, cgroups, seccomp, and more.
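In the same spirit as that exercise, here is a toy sketch in Go (my own, not the course’s solution) showing how little is needed to start a process inside fresh UTS, PID, and mount namespaces:

```go
// toy.go: a toy "container". Linux only; needs root (or CAP_SYS_ADMIN):
//   go run toy.go run /bin/sh
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	if len(os.Args) < 3 {
		fmt.Println("usage: toy (run|child) <command> [args...]")
		os.Exit(1)
	}
	switch os.Args[1] {
	case "run":
		// Re-exec ourselves so the child starts inside fresh UTS, PID,
		// and mount namespaces (clone(2) flags set via SysProcAttr).
		cmd := exec.Command("/proc/self/exe", append([]string{"child"}, os.Args[2:]...)...)
		cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
		cmd.SysProcAttr = &syscall.SysProcAttr{
			Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
		}
		must(cmd.Run())
	case "child":
		// Inside the new namespaces: the hostname change is invisible
		// to the host, and exec makes the command PID 1 in here.
		must(syscall.Sethostname([]byte("toy-container")))
		must(syscall.Exec(os.Args[2], os.Args[2:], os.Environ()))
	}
}
```

A real container would add a root filesystem switch (pivot_root), a fresh /proc mount, cgroup limits, and a seccomp profile on top of this skeleton.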

Moreover, students get access to highly technical material and practical exercises, primarily written in C and Go, though any language that implements the relevant operating system interfaces can be used. The labs were a hands-on experience, allowing me to apply the knowledge from the training and the book The Linux Programming Interface to real-world instances (spoiler: buffering is a key concept).
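The buffering point is the kind of detail the labs make visceral: user-space buffered I/O reaches the kernel only when the buffer fills or is flushed. A tiny Go illustration of the same idea (my own, not a lab exercise):

```go
// User-space buffering in miniature: bytes handed to a buffered writer
// do not reach the kernel (and thus the terminal) until flushed,
// analogous to stdio buffering over write(2) as covered in TLPI.
package main

import (
	"bufio"
	"fmt"
	"os"
	"time"
)

func main() {
	w := bufio.NewWriter(os.Stdout)
	fmt.Fprintln(w, "buffered: you will not see me for two seconds")
	time.Sleep(2 * time.Second) // the line above is still in user space
	w.Flush()                   // one write(2) now delivers the whole buffer
}
```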

Completing the material on a native Linux installation, or in a VM solution that allows adjusting kernel settings, is highly recommended but optional. I used Ubuntu 23, and VirtualBox served me well, especially for the cgroup sections.

I can’t thank Michael Kerrisk enough for this exceptional training. His book, The Linux Programming Interface, and his guidance have significantly improved my understanding and skills in Linux. I highly recommend the trainings for anyone interested in security, containers, systems programming, or any combination of these (https://man7.org/training/).

Optimise for decisions

When I was reading the observations on the Challenger shuttle disaster (https://science.ksc.nasa.gov/shuttle/missions/51-l/docs/rogers-commission/Appendix-F.txt), I began wondering about the value of consensus when we are challenged to solve a wicked, novel problem. The shuttle was built using data, and there was consensus on how and what to build, or at least enough consensus to create agreements on which processes to follow.

I am quoting the conclusions of the observations here:

If a reasonable launch schedule is to be maintained, engineering often cannot be done fast enough to keep up with the expectations of originally conservative certification criteria designed to guarantee a very safe vehicle. In these situations, subtly, and often with apparently logical arguments, the criteria are altered so that flights may still be certified in time. They therefore fly in a relatively unsafe condition, with a chance of failure of the order of a percent (it is difficult to be more accurate).

Official management, on the other hand, claims to believe the probability of failure is a thousand times less. One reason for this may be an attempt to assure the government of NASA perfection and success in order to ensure the supply of funds. The other may be that they sincerely believed it to be true, demonstrating an almost incredible lack of communication between themselves and their working engineers.

In any event this has had very unfortunate consequences, the most serious of which is to encourage ordinary citizens to fly in such a dangerous machine, as if it had attained the safety of an ordinary airliner. The astronauts, like test pilots, should know their risks, and we honor them for their courage. Who can doubt that McAuliffe was equally a person of great courage, who was closer to an awareness of the true risk than NASA management would have us believe?

Let us make recommendations to ensure that NASA officials deal in a world of reality in understanding technological weaknesses and imperfections well enough to be actively trying to eliminate them. They must live in reality in comparing the costs and utility of the Shuttle to other methods of entering space. And they must be realistic in making contracts, in estimating costs, and the difficulty of the projects. Only realistic flight schedules should be proposed, schedules that have a reasonable chance of being met. If in this way the government would not support them, then so be it. NASA owes it to the citizens from whom it asks support to be frank, honest, and informative, so that these citizens can make the wisest decisions for the use of their limited resources.

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

This reminds me of software engineering. We create agreements and processes which we follow, and then we feel secure about our work once the checklist is complete: if the checklist is completed, then surely the system must be reliable, robust, maintainable, and safe.

I disagree that consensus, checklists, and data (I will name these “the agreements toolkit”) are enough for building complex, reliable, and fault-tolerant systems. Such systems include wicked problems that require black box thinking. The agreements toolkit is certainly valuable; for example, it enables teams to iterate fast and solve most problems quickly and efficiently. But when the agreements toolkit is all we have and trust for solving wicked problems, it leads to consensus optimisation.

What if, instead, we chose to optimise for decisions? Based on these observations, and as an experiment, I will start holding post-mortems (which I will call pre-mortems) not only when things go wrong but also after successful projects and releases. Behind every success there is luck, and there are things that were never optimised. With pre-mortems, I want to find out what role luck played in the project’s success and which issues remained unsolved.