ACCU Autumn 2019 Sessions

"Movie Night"

Algorithm Intuition II

Conor Hoekstra

At C++Now 2019, I gave the first installment of this talk entitled Algorithm Intuition, which went on to win Best Overall Talk of the conference. The talk covered the 11 algorithms in the <numeric> header. This talk will have the same goal, to develop algorithm intuition - but for algorithms in the <algorithm> header.

Better CTAD for C++20

Timur Doumler

Class Template Argument Deduction (CTAD), introduced in C++17, is a useful feature that allows us to write cleaner and shorter code by omitting class template arguments. Unfortunately, the actual language rules behind CTAD are quite complex, and there are many pitfalls. CTAD sometimes deduces unexpected template arguments, and often just doesn’t work at all on certain classes, even if most users would expect that it should.

This talk summarises the actual language rules behind CTAD and its most annoying flaws and pitfalls. We will then talk about how the upcoming C++20 standard will remove some of those flaws, leading to better and safer CTAD that is easier to use.
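As an illustrative taster (a minimal sketch, not taken from the talk; the Point aggregate is a hypothetical example type, and the last line assumes a C++20 compiler):

    #include <utility>
    #include <vector>

    // Hypothetical aggregate template, used to show the C++20 improvement.
    template <class T>
    struct Point { T x, y; };

    int main() {
        // C++17 CTAD: the class template arguments are deduced from the initializers.
        std::pair   p{42, 3.14};   // std::pair<int, double>
        std::vector v{1, 2, 3};    // std::vector<int>

        // A classic pitfall: with a single argument of the same vector type,
        // the copy deduction candidate wins.
        std::vector v2{v};         // std::vector<int> (a copy), not std::vector<std::vector<int>>

        // C++17: ill-formed without a user-written deduction guide, because
        // aggregates have no constructors for CTAD to inspect.
        // C++20: aggregate CTAD makes this just work.
        Point pt{1.0, 2.0};        // Point<double>

        (void)p; (void)v2; (void)pt;
    }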

C++ Insights: How stuff works, Lambdas and more!

Andreas Fertig

With the help of C++ Insights we’ll dive into how things work in C++, through the eyes of the compiler.

For example, we’ll walk through how the compiler generates lambdas and explore why you might care. We won’t stop there, though! We’ll also look at some apparently simpler cases like implicit conversions and how in-class initializers work.
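To give a flavour of that kind of desugaring (a hedged sketch; the real compiler-generated closure type is unnamed, so the class below is purely illustrative):

    #include <iostream>

    int main() {
        int x = 42;

        // The lambda we write...
        auto add = [x](int y) { return x + y; };

        // ...behaves roughly like this hand-written closure type.
        class Closure {
        public:
            explicit Closure(int captured) : x_(captured) {}
            int operator()(int y) const { return x_ + y; }
        private:
            int x_;  // the by-value capture becomes a data member
        };
        Closure add2{x};

        std::cout << add(1) << ' ' << add2(1) << '\n';  // 43 43
    }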

And of course this is C++ so it would be remiss of us not to also take a peek at one of our favourite C++ features: variadic templates!

Welcome to the compiler’s world with C++ Insights, and at the very least come away from the talk with a whole new way of looking at the code you write!

C++ Modules and Large-Scale Development

John Lakos

Much has been said about how the upcoming module feature in C++ will improve compilation speeds and reduce reliance on the C++ preprocessor. However, program architecture will see the biggest impact.

This talk explains how modules will change how you develop, organize, and deploy your code. We will also cover the stable migration of a large code base to be consumable both as modules and normal headers.
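For readers who have not yet seen the syntax, a minimal sketch of a module interface unit and a consumer (two files shown together; the file names are illustrative and exact build commands vary by compiler):

    // math.cppm - module interface unit
    export module math;

    export int add(int a, int b) {
        return a + b;        // only exported names are visible to importers
    }

    // main.cpp - consumer: no preprocessor, no textual inclusion
    import math;

    int main() {
        return add(2, 2) == 4 ? 0 : 1;
    }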

Coarse grained application composition using Pipelines and Buses

Chris Kohlhoff

In the world of low latency finance we are required to build systems that must provide high performance across multiple dimensions, for different deployments, but on the same codebase. We are also required to support very strict failure recovery and availability guarantees to meet ever-increasing regulatory demands.

In that environment a pipeline and bus architecture, in the spirit of algorithmic architecture (previously presented at ACCU), can be a highly effective composition technique for building such high performance systems.

This talk will introduce the foundational libraries and techniques that support such a technique, and dive into some of the ways these libraries can leverage modern and future C++ features to solve common architectural problems in performance critical domains.

Enforce Inform Ignore Assume - Gradual Adoption of Contracts In Production Code

Alisdair Meredith

C++23 is adding a contract checking facility directly into the language, but even prior to the language feature, contract checking systems, often built around macros such as BSLS_ASSERT in the Bloomberg open source BDE library, have supported developers in describing their interfaces and auditing their code for errors.

In this talk, Alisdair Meredith will present the four fundamental semantics of a contract check that can support rolling out a contract facility retroactively into a live production system. The basic workflow is to insert the contracts as rich comments that are Ignored, then turn on some telemetry to Inform you when contracts are violated, while continuing as before. Once there is confidence that the system has addressed all known issues (which may take some time!), contracts can be Enforced, terminating the program when a violation is detected. Finally, for performance critical parts of the system, contracts may be Assumed by the optimizer, rather than checked at runtime, once the system is believed to be bug-free.
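As a rough illustration of those four semantics (a hand-rolled sketch; MY_CONTRACT and its mode macros are hypothetical names, not the BDE macros or the proposed language syntax, and the Assume branch uses a GCC/Clang-specific builtin):

    #include <cstdio>
    #include <cstdlib>

    // Select one semantic per build: Ignore (default), Inform, Enforce, or Assume.
    #if defined(MY_CONTRACT_ENFORCE)
    #  define MY_CONTRACT(cond) \
         do { if (!(cond)) { std::fprintf(stderr, "violation: %s\n", #cond); std::abort(); } } while (0)
    #elif defined(MY_CONTRACT_INFORM)
    #  define MY_CONTRACT(cond) /* log and carry on, feeding telemetry */ \
         do { if (!(cond)) std::fprintf(stderr, "violation (continuing): %s\n", #cond); } while (0)
    #elif defined(MY_CONTRACT_ASSUME)
    #  define MY_CONTRACT(cond) /* the optimizer may exploit this */ \
         do { if (!(cond)) __builtin_unreachable(); } while (0)
    #else
    #  define MY_CONTRACT(cond) ((void)0)  // Ignore: the contract is a rich comment
    #endif

    int divide(int num, int den) {
        MY_CONTRACT(den != 0);  // precondition
        return num / den;
    }

    int main() { return divide(6, 3) == 2 ? 0 : 1; }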

This talk will refer to both the proposed C++23 language support for contracts, and the open source BDE library facility that allows largely the same workflow in a C++03 toolchain, as a practical alternative where the proposed language feature is not yet available for experimentation.

Error is not an error

Jamie Allsop, Chris Kohlhoff

There has been a great deal of interest and activity in the space of error handling in the C++ community over the past few years, and the topic of exceptions has always evoked strong views as to their applicability and value.

In the language itself we see several efforts to provide alternative error handling strategies and mechanisms that are predicated on the idea that errors are a special case of flow control.

This talk deconstructs the concept of an error into its constituent parts, and then shows a model of composition that allows reasoning about errors as events in a stack of state machines. We show that C++ exceptions as they exist today are a conflation of three separate issues that can, and probably should, be separated. We will hopefully demonstrate that this conflation, and others that are commonplace in the accepted wisdom about errors, is one of the primary reasons why effective error handling strategies are hard to define.

To address this challenge we show how errors can instead be reasoned about as events, and that in most cases errors are, in fact, not errors. To support this we introduce a more helpful vocabulary for considering events as triggers for state transitions within, and context switches between, state machines within a system.

We will examine examples from real-world use cases that demonstrate the utility of the vocabulary, and then illustrate how the facilities of C++20 and beyond also fit into this world.

Faster Code Through Parallelism on CPUs and GPUs

David Olsen

Ever since multicore CPUs became widely available, programmers have been working to get compute-intensive code to run in parallel and take advantage of CPU hardware parallelism. This effort has continued in the era of general-purpose programming on GPUs. There are many approaches to parallelizing C++ code on multicore CPUs or GPUs: C++11 threads, OpenMP or OpenACC pragmas, CUDA, and class libraries like Kokkos are among the options. The C++17 standard introduced parallel versions of standard algorithms, offering an approach that is fully portable across C++17 implementations and supports both CPUs and GPUs.
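For instance, a C++17 parallel algorithm call differs from its sequential counterpart only in the execution policy argument; a minimal sketch (whether it actually runs in parallel, and how fast, depends on the implementation and hardware):

    #include <cstdio>
    #include <execution>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<double> v(10'000'000, 1.0);

        // Sequential reduction.
        double seq = std::reduce(v.begin(), v.end(), 0.0);

        // Same algorithm; the implementation is allowed to parallelize and vectorize it.
        double par = std::reduce(std::execution::par_unseq, v.begin(), v.end(), 0.0);

        std::printf("%.1f %.1f\n", seq, par);  // 10000000.0 10000000.0
    }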

This talk will survey many of these approaches and compare them for ease of use, clarity of the code, and performance. It will include an overview of the current state of implementations of C++17 parallel algorithms in different compilers.

Formatting floating-point numbers

Victor Zverovich

In this talk you will learn more than you ever wanted to know about floating-point formatting, from basics to recent developments in the area. You will find out why printf drags in a multiprecision arithmetic library, what the C++17 std::to_chars is all about, why it is so difficult to implement, and how to do it efficiently. You will also learn how the popular {fmt} library and the forthcoming C++20 std::format do floating-point formatting, and how they can benefit your code.
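As a small taste of the std::to_chars part of the story (a sketch; the exact digits shown assume a conforming shortest-round-trip implementation):

    #include <charconv>
    #include <cstdio>

    int main() {
        char buf[32];

        // C++17 std::to_chars: locale-independent, non-allocating, and by default
        // produces the shortest string that round-trips to the same double.
        auto [ptr, ec] = std::to_chars(buf, buf + sizeof buf, 0.1);
        if (ec == std::errc{}) {
            *ptr = '\0';
            std::puts(buf);  // "0.1", not "0.1000000000000000055511151231257827"
        }
    }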

By the end of the talk you will be able to convert binary floating-point to decimal in your mind or you will get your money back!

From Algorithm to Generic, Parallel Code

Dietmar Kühl

This presentation starts with a parallel algorithm as it is described in books and turns it into a generic implementation. Multiple options for running the algorithm concurrently based on different technologies (OpenMP, Threading Building Blocks, C++ standard-only) are explored.

Using parallel algorithms seems like an obvious way to improve the performance of operations. However, utilizing more processing power often requires additional work to be done, and depending on the available resources and the size of the problem, the parallel version may actually take longer than a sequential version. Looking at the actual implementation of an algorithm should clarify some of the tradeoffs.

Showing how a parallel algorithm can be implemented should also demonstrate how such an algorithm can be created when there is no suitable implementation available from the [standard C++] library. As the implementation of a parallel algorithm isn’t trivial, it should also become clear that using a readily available implementation is much preferable.
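To give a flavour of the "C++ standard-only" option (a hedged sketch, not the implementation presented in the talk): a divide-and-conquer reduction that falls back to sequential work below a threshold, which is exactly where the size-of-problem tradeoff shows up.

    #include <future>
    #include <iterator>
    #include <numeric>
    #include <vector>

    // Minimal generic parallel reduce using only the standard library.
    // The threshold decides when spawning another task is worth the overhead.
    template <class It, class T>
    T parallel_reduce(It first, It last, T init, std::size_t threshold = 100'000) {
        const auto size = static_cast<std::size_t>(std::distance(first, last));
        if (size <= threshold) {
            return std::accumulate(first, last, init);  // sequential base case
        }
        It mid = std::next(first, static_cast<std::ptrdiff_t>(size / 2));
        auto right = std::async(std::launch::async, [=] {
            return parallel_reduce(mid, last, T{}, threshold);
        });
        T left = parallel_reduce(first, mid, init, threshold);
        return left + right.get();
    }

    int main() {
        std::vector<int> v(1'000'000, 1);
        return parallel_reduce(v.begin(), v.end(), 0) == 1'000'000 ? 0 : 1;
    }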

From functions to Concepts: Impact on maintainability and refactoring for higher-level design features

Titus Winters

Higher levels of abstraction are useful for building things out of, but also have a higher cognitive and maintenance cost. That is, it’s a lot easier to refactor a function than it is to change a type, and similarly easier to deal with a single concrete type than a class template, or a Concept, or a meta-Concept…​

In this talk I’ll present example strategies for refactoring the interface of functions, classes, and class templates. I’ll also discuss how the recent addition of Concepts and the proposals for even-more-abstract features affect long-term refactoring in C++. If you’re interested in refactoring, and it isn’t immediately clear that a Concept published in a library can never change, this talk is for you.
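As a concrete anchor for that last point (a hedged sketch; Drawable is a hypothetical concept, not one from any real library): once callers constrain their templates on a published concept, tightening its requirements breaks every previously conforming type.

    #include <concepts>

    // Hypothetical published concept. Adding a new requirement later (say,
    // `t.bounds()`) would silently break every type that satisfied it before.
    template <class T>
    concept Drawable = requires(T t) {
        { t.draw() } -> std::same_as<void>;
    };

    template <Drawable T>
    void render(T& shape) {
        shape.draw();
    }

    struct Circle {
        void draw() {}  // satisfies Drawable today
    };

    int main() {
        Circle c;
        render(c);
    }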

Interop Between Kotlin Native and C++ / Swift - the Good, the Bad and the Ugly

Garth Gilmour

This talk assumes familiarity with the basics of Kotlin and focuses on the low level mechanics of interoperability with existing libraries and frameworks. It is intended for Unix and Mac developers interested in adopting Kotlin Native on new projects.

We begin by explaining how the language uses reference counting and a standard memory model to produce non-VM based applications that can be compiled across multiple platforms. We then show a sample library in C and how to use the cinterop tool to generate wrapper types which can be called directly from Kotlin.

We build progressively on this library, enhancing it with the use of pointers (including function pointers and void *), opaque types, callbacks, dynamically allocated memory and other C features. Once the point has been proved in a contrived setting we will demo a Kotlin application that interoperates with mainstream open source libraries to show the practical utility.

Having made the case for C++ interop we will then show a similar case study with Swift and discuss the relative merits of Swift and Kotlin for mobile development. Finally we will discuss performance and ways of creating meaningful metrics to inform your choice of language and platform.

Introduction to Cache Oblivious Algorithms

Mike Shah

There have been a variety of talks recently on the importance of Data Oriented Design - that is, designing data structures to maximize cache hits and minimize cache misses in order to improve execution time. However, for software that runs on multiple platforms, each with a potentially different cache hierarchy, developing cache-aware algorithms can be difficult.

In this talk, we will introduce and develop from scratch a cache oblivious algorithm to demonstrate what such algorithms are. The audience will leave this talk with knowledge of how to develop and use fundamental data structures that have been designed to be 'cache oblivious'.
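As a preview of the general idea (a hedged sketch, not necessarily the algorithm developed in the talk): a cache-oblivious out-of-place matrix transpose recursively splits the larger dimension until the sub-problem is tiny, so it exploits every level of whatever cache hierarchy it meets without naming a single cache parameter.

    #include <cstddef>
    #include <vector>

    // Recursive, cache-oblivious transpose of an n x n row-major matrix: b = a^T.
    void transpose(const double* a, double* b, std::size_t n,
                   std::size_t r0, std::size_t r1,
                   std::size_t c0, std::size_t c1) {
        if ((r1 - r0) * (c1 - c0) <= 64) {          // tiny base case: transpose directly
            for (std::size_t i = r0; i < r1; ++i)
                for (std::size_t j = c0; j < c1; ++j)
                    b[j * n + i] = a[i * n + j];
        } else if (r1 - r0 >= c1 - c0) {            // split the taller block
            const std::size_t rm = r0 + (r1 - r0) / 2;
            transpose(a, b, n, r0, rm, c0, c1);
            transpose(a, b, n, rm, r1, c0, c1);
        } else {                                    // split the wider block
            const std::size_t cm = c0 + (c1 - c0) / 2;
            transpose(a, b, n, r0, r1, c0, cm);
            transpose(a, b, n, r0, r1, cm, c1);
        }
    }

    int main() {
        const std::size_t n = 1024;
        std::vector<double> a(n * n, 1.0), b(n * n);
        transpose(a.data(), b.data(), n, 0, n, 0, n);
    }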

Make your programs more reliable with Fuzzing

Marshall Clow

Every day, you read about another security hole found in some widely-used piece of software. Browsers, media players, support libraries - the list goes on and on. You probably use some of those every day. In this talk, I’ll talk about one technique, called "Fuzzing", which you can use to make your programs more reliable when dealing with data "from the outside".

I’ll talk about the general idea of Fuzzing, why it is useful, a brief history, the current state of the art, and some existing tools/libraries/services to help you harden your program. I’ll also have some examples from libc++ and Boost.
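As one concrete example of what such a harness can look like (a sketch using LLVM's libFuzzer entry point; parse_config is a hypothetical stand-in for whatever consumes outside data in your program):

    #include <cstddef>
    #include <cstdint>
    #include <string>

    // Hypothetical function under test: a stand-in for any code that parses
    // untrusted input.
    bool parse_config(const std::string& text) {
        return !text.empty() && text.front() == '[';
    }

    // libFuzzer entry point: the fuzzer calls this repeatedly with mutated inputs,
    // guided by coverage, looking for crashes and sanitizer reports.
    // Build with, e.g.: clang++ -g -fsanitize=fuzzer,address harness.cpp
    extern "C" int LLVMFuzzerTestOneInput(const std::uint8_t* data, std::size_t size) {
        parse_config(std::string(reinterpret_cast<const char*>(data), size));
        return 0;  // a non-crashing input is simply "uninteresting"
    }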

No more secrets? Why your secrets aren’t safe and what you can do about it

Neil Horlock

Public key cryptography is ubiquitous: it secures our online lives, identifying and establishing trust with others and underpinning the payments we make. It ties together your blockchains and makes sure you cannot unravel them. In short, the modern world runs on public key infrastructures, so what if they were to break?

Quantum computing is real and present and growing in power. This talk will look at quantum computing and the threat it poses to present-day security and identification. We’ll look at why this is no longer tomorrow’s problem, and at reasons why you really should be changing your approach to encryption sooner rather than later.

Finally, we’ll have a look at solutions, from using quantum science itself to secure your communications to post-quantum cryptography techniques that you can employ today, with some practical demonstrations.

Quantifying Accidental Complexity: An Empirical Look at Teaching and Using C++

Serverless Containers with KEDA

Mark Allan

With the growing ubiquity of containers and the surge of interest in serverless and hyperscale solutions, it was only natural that the next step would be serverless containers. Learn how to build the best of both worlds with Kubernetes and KEDA.

To Be Confirmed

TBC

TBC

The C++20 Synchronization Library

Bryce Adelstein Lelbach

In the decade since C++11 shipped, the hardware landscape has changed drastically. 10 years ago, we were still in the early stages of the concurrent processing revolution; 2 to 4 hardware threads were common and more than 10 was "many". Our tolerance for synchronization latency was greater; we were willing to pay microseconds and milliseconds.

Today, dozens and hundreds of threads are common, and "many" means hundreds of thousands. Concurrent applications are plagued by contention challenges that were unimaginable a decade ago. With the traditional tools we have today, programmers often have to choose between unacceptable contention and unacceptably high latency when synchronizing between threads.

The C++20 synchronization library brings solutions - new lightweight synchronization primitives that can efficiently marshal hundreds of thousands of threads:

  • std::atomic::wait/std::atomic::notify_*: Efficient atomic waiting.

  • std::atomic_ref: Atomic operations on non-std::atomic objects.

  • std::counting_semaphore: Lightweight access coordination.

  • std::latch and std::barrier: Marshalling groups of threads.

In this example-oriented talk, you’ll learn how and when to use these new tools to build scalable, modern C++ applications that can run in parallel on virtually any hardware, from embedded controllers to server CPUs to modern GPUs.
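To make one of those primitives concrete, here is a minimal sketch using std::latch to marshal a small group of threads (assuming a C++20 standard library that ships <latch> and std::jthread):

    #include <cstdio>
    #include <latch>
    #include <thread>
    #include <vector>

    int main() {
        constexpr int workers = 4;
        std::latch done{workers};               // single-use countdown

        std::vector<std::jthread> pool;
        for (int i = 0; i < workers; ++i) {
            pool.emplace_back([&done, i] {
                std::printf("worker %d finished\n", i);
                done.count_down();               // no mutex, no condition variable
            });
        }

        done.wait();                             // block until all workers arrive
        std::printf("all %d workers done\n", workers);
    }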

The Journey to Heterogeneous Programming

The Many Variants of std::variant

Nevin ":-)" Liber

There was (and still is) much controversy around the discriminated union variant type included in C++17. This talk is a first hand account of the history and process, as well as the details of the various design deliberations and tradeoffs that were made to achieve consensus. It will get into both the performance and usability considerations that were debated, as well as some speculation as to where the C++ committee might like to take it in the future (pattern matching, language-based variant, and so on), including any progress made at the Belfast C++ Standards meeting (taking place the week before this ACCU conference).

If you’d like to learn more about std::variant, discriminated union variant types in general, or gain insight into what it takes to bring a feature through the standardisation process, then this talk is for you!

The Secret Life of Numbers

John McFarlane

They say that 'data expands to fill the space available for storage'. That goes double for numbers — of which there are many. This talk will present my view of numeric types as spans over an infinite range of digits. I’ll use this perspective to explore some popular topics in C++ such as compile-time evaluation, fixed-point arithmetic and undefined behavior.

While the audience is distracted by ones and zeros, I will flash up subliminal messages compelling them to use my Compositional Numeric Library!
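For the fixed-point thread of that story, a hedged sketch of the underlying idea (illustrative only; this is not the API of the Compositional Numeric Library): a fixed-point number is just an integer viewed through a compile-time scale.

    #include <cstdint>
    #include <cstdio>

    // Minimal fixed-point sketch: the stored integer `rep` represents the value
    // rep / 2^FractionBits. A real library adds conversions, promotion, rounding...
    template <class Rep, int FractionBits>
    struct fixed {
        static_assert(FractionBits > 0 && FractionBits < 31, "keep the sketch simple");

        Rep rep;

        static constexpr double scale = double(1u << FractionBits);
        static fixed from_double(double d) { return {static_cast<Rep>(d * scale)}; }
        double to_double() const { return rep / scale; }
    };

    int main() {
        using q16_16 = fixed<std::int32_t, 16>;   // 16 integer bits, 16 fraction bits
        q16_16 a = q16_16::from_double(3.25);
        q16_16 b = q16_16::from_double(0.5);
        q16_16 sum{a.rep + b.rep};                // same scale, so just add the reps
        std::printf("%f\n", sum.to_double());     // 3.750000
    }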

The Truth of a Procedure

Lisa Lippincott

One way of modeling a procedure mathematically is to treat it as a statement about the ways in which events can be arranged by a computer. This conception brings programming into the domain of mathematical logic, the study of truth and proof in formal languages.

In this session, I will explain how to read a procedure and its interface as a sentence, how that sentence may be true or false, possible or impossible, necessary or provable.

This presentation of programming from a logician’s perspective is intended to complement the topologist’s perspective of my previous work, "The Shape of a Program", but is independent of the material covered there.