2 DAY CONFERENCE

YOW! Lambda Jam 2021

Wednesday, 5th - Thursday, 6th May, Online Conference

19 experts spoke.
Overview

YOW! Lambda Jam returns 5 & 6 May 2021.

YOW! Lambda Jam is an opportunity for applied functional software developers working in languages such as Scala, Elixir, Erlang, and Haskell to enhance their software development skills using the principles, practices, and tools of functional programming.

YOW! Lambda Jam has been the go‑to conference for functional programmers in Australia since 2013. Now in 2021 you'll have the chance to learn and share with functional programmers from around the globe by joining us online at our two-day conference, featuring invited international and Australian experts — including keynotes from Simon Peyton Jones and José Valim.



This year’s YOW! Lambda Jam will take place online.

Talks will be delivered live and followed by an asynchronous Q&A via chat.

We’ll schedule ample breaks so you have time to grab a coffee before networking and interacting with speakers and other attendees, and we are investigating safe and practical ways for attendees to interact face-to-face as part of the conference.

At Skills Matter, we’ve chosen to see the events of the past year as a challenge to make our content and community more inclusive and accessible to all. Beyond the COVID‑19 pandemic, we have a vision of a community where knowledge sharing and skills transfer are not limited by physical barriers.

We are excited about the opportunity to truly welcome the functional programming community to this year’s Lambda Jam, no matter where you are in the world. We hope to see you there!


Explore YOW! Lambda Jam 2021

Get involved, plan your conference, or start your learning today



View the programme

Our two-day online conference features expert-led talks including keynotes from Simon Peyton Jones and José Valim.

Speakers

Revisit 2020

View (or review) the talks from Lambda Jam 2020.

Visit the library


Programme

Unison: a friendly programming language from the future

Unison is a new open source functional programming language based on a simple idea with big implications: every type and definition in the language has a unique cryptographic signature, determined by its content. Instead of a bag of mutable text files, the Unison codebase is a distributed immutable data structure and the signature serves as a global address into this structure. This is the basis for some serious improvements to the developer experience. Unison has no builds, no dependency conflicts in the traditional sense, and it allows for easy dynamic deployment of code in a distributed setting.
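The core idea above, a codebase as an immutable map keyed by content hashes with names layered on top, can be made concrete with a toy sketch. The Python below is illustrative only (Unison is not implemented this way); it just models "every definition's address is the hash of its content":

```python
import hashlib

def address(definition: str) -> str:
    """A definition's global address: the hash of its content."""
    return hashlib.sha3_512(definition.encode()).hexdigest()

codebase = {}  # hash -> definition (append-only, immutable entries)
names = {}     # human-readable name -> hash (a separate layer)

def add(name: str, definition: str) -> str:
    h = address(definition)
    codebase[h] = definition  # identical content always lands at the same address
    names[name] = h           # the name is just a label pointing at the hash
    return h

# Two names for the same content share one address, so it is stored once:
h1 = add("increment", "x -> x + 1")
h2 = add("successor", "x -> x + 1")
assert h1 == h2 and len(codebase) == 1
```

Because the store is keyed by content, "the same code" can never exist twice, which is the property the dependency and deployment stories build on.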

Q&A

Question: So if you have a hashed store of evaluations, does this mean you have timing attacks where you query a store, looking for cache hits to get a handle on exactly what code someone is running?

Answer: I’ll have to think about that one. My initial reaction is that you’d have to have the kind of in-process access to the node such that you could just dump its memory anyway.


Question: Does Unison have FFI? (This is a joke -- in case you're not fully awake yet. But not completely a joke...)

Answer: We don’t have FFI exposed as a programmer facility, but that’s planned.

Question: So I guess that means there’s currently no interop with Haskell?

Answer: There’s no programmer facility for calling arbitrary Haskell code, no


Question: If everything is a hash, won't there inevitably be collisions? Wouldn't it then be possible to accidentally invoke a function that launches nuclear missiles instead of reading a number from a DB?

Answer: The hashes are 512-bit SHA-3, so hash collisions are unimaginably unlikely. If you hashed a billion Unison definitions per second, you could expect your first collision in about 100 trillion years.

I’m sympathetic to the fact that some applications (like missile silos) won’t want to take that kind of risk (although your silo is more likely to be struck by lightning and a meteor at the same time)
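As a sanity check on these numbers, here is a back-of-the-envelope birthday-bound calculation (my own illustrative arithmetic, not an official Unison figure). It suggests the quoted 100 trillion years is, if anything, very conservative:

```python
import math

BITS = 512                 # SHA-3-512 output size
RATE = 1e9                 # hashes per second
YEAR = 365.25 * 24 * 3600  # seconds per year

def collision_probability(n: float, bits: int = BITS) -> float:
    """Probability of at least one collision among n random b-bit hashes,
    approximated by the birthday bound n^2 / 2^(b+1) (valid for small p)."""
    return n * n / 2.0 ** (bits + 1)

# After 100 trillion years at a billion hashes per second:
n = RATE * YEAR * 1e14
p = collision_probability(n)  # astronomically small

# Years until a 50% chance of any collision: n50 ~ 1.1774 * 2^(b/2).
n50 = 1.1774 * 2.0 ** (BITS / 2)
years_to_even_odds = n50 / (RATE * YEAR)
```

With these inputs, `p` comes out far below anything physically meaningful, and `years_to_even_odds` dwarfs 1e14 years.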


Question: If a node always gets the code exactly when called, is there any story around blue-green style deployments? Is there anything for using nodes running on different platforms (operating system, processor architecture, etc)?

Answer: The node could have the code pre-loaded for sure. There will always be some subset of your codebase that’s on every node right from the start. Can you elaborate on blue/green deployments? I’m not familiar with that. Unison runs on a virtual machine written in Haskell, so it can currently run on any architecture that Haskell can target.

Question: Blue/green is where a new version of the code is deployed without taking the old down, and then a swap to point at the new version is performed (DNS or otherwise).

Answer: Oh yes, absolutely. Since everything is first-class, you can imagine for example a process listening on a queue for its own definition.


Question: Does this make Unison documentation Turing complete? Do you write tests for your documentation?

Answer: Yes and yes.


Question: You say everything is referenced by hash rather than name, but you're still using names like myConstant and type Nat - so it seems as though there are names associated with these hashes in some way - how does that work?

Answer: Names are a “development-time” thing. They’re only used for two things:

1. Show something nicer than a hash to the programmer when printing code.

2. Reference hashes using something nicer for the programmer than hashes.

The compiled form has only the hashes, and then the hash has a name (or several) associated with it which we can use to print the code back. When you update a name, UCM interprets that as meaning “move this name to the hash of this definition…and create new versions of its dependencies (scoped to a specific namespace) that reference the new hash, and update their names too”.

So potentially, when you update, Unison does a lot of code generation and moves names to new hashes.
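The update mechanics described above can be sketched as a toy model (my own illustration, not UCM's real algorithm; the hashes are truncated purely for readability):

```python
import hashlib

defs = {}   # hash -> body; append-only, bodies refer to other defs by hash
names = {}  # name -> current hash; the mutable "development-time" layer

def put(name: str, body: str) -> str:
    h = hashlib.sha3_512(body.encode()).hexdigest()[:16]  # truncated for readability
    defs[h] = body
    names[name] = h
    return h

def update(name: str, new_body: str) -> str:
    """Move `name` to the new body's hash, then rebuild every definition
    that referenced the old hash, so its name moves to a new hash too."""
    old = names[name]
    new = put(name, new_body)
    for n, h in list(names.items()):
        if n != name and old in defs[h]:
            update(n, defs[h].replace(old, new))
    return new

base = put("base", "41")
put("answer", base + " + 1")  # `answer` refers to `base` by hash, not by name
update("base", "42")
# Both names now point at freshly hashed definitions:
assert defs[names["base"]] == "42"
assert names["base"] in defs[names["answer"]]
```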


Question: Sorry if this is a silly question, but if unison code is like a database and can be exposed on the web, can one expose an api to make a post to a unison server and then mutate the underlying unison program?

Answer: It’s more like an immutable Git database: it’s append-only. You can create new programs, but you can’t change them.

Question: How do names interact, though? Names are not immutable.

Answer: They are. When you change a name, you create a new namespace and the previous one gets added to its history.


Question: Do I import my dependencies by hash? Or: how do I "update my dependencies"? E.g. if I use an external function foo, it will reference its hash. But when the author of foo changes the code, how do I get the new version of the code?

Answer: If that author is you, you can have UCM do the update for you automatically (the update command does this). If the author is not you, the author can ship a patch as part of a “release” of the library. A patch is just a map from old hash to new hash, basically. Currently when you pull a dependency from e.g. Git, any patch named patch in its root gets automatically applied.
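A patch in this sense can be sketched as exactly that, a dictionary from old hash to new hash (toy illustration; the hash strings below are made up):

```python
def apply_patch(patch: dict, body: str) -> str:
    """Rewrite every reference to an old hash into its replacement."""
    for old, new in patch.items():
        body = body.replace(old, new)
    return body

# A library "release" ships a patch mapping superseded hashes to new ones:
patch = {"#a1b2": "#c3d4"}
assert apply_patch(patch, "foo = #a1b2 + #a1b2") == "foo = #c3d4 + #c3d4"
```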


Question: Wouldn't it be nice if sensible optimisations/transformations were applied before hashing, so that if you wrote traverse in a slightly different way from someone else, Unison could still tell you you just wrote traverse (again)... Is this possible now? On the roadmap?

Answer: Possible within limits. In general you run into questions of intensional vs extensional equivalence. This is not on the roadmap.


Question: The name lookup system seems to enable ad-hoc polymorphism per function

Answer: Yes! Function names are “polymorphic” in the sense that the type information is used to look up the code for them. Kind of like virtual method tables, innit


Question: Similar to Haskell, could you annotate a definition (to be included in its hash) with rewrite rules to apply to expressions that reference that definition?

Answer: Certainly possible in future!


Question: How do copyleft licenses interact with the unison "store"? If I write something in the strongest copyleft licence I can find, and you write the same function under WTFPL, what happens? Does licence information change the hash of functions?

Answer: You can in principle compute the “transitive license” of any definition, and all the components are there for spitting out a warning if you’re about to use something that introduces a license you don’t want. But we’re not doing anything (and don’t intend to do anything) to enforce licenses. If you write the same definition as somebody else under a different license, we do basically what GitHub does.

Question: When you compute the transitive license, is "incompatible" a valid answer? E.g. CDDL and GPLv3.

Answer: You could definitely do that. We’re not currently doing anything interesting with licenses, but once we have a Unison API for the codebase you’ll be able to write stuff like this
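A "transitive license" computation of the kind described could look something like the sketch below. The dependency graph, license assignments, and incompatibility table are all hypothetical; per the answer above, Unison provides the components but no such built-in today:

```python
# Hypothetical dependency graph and licenses, for illustration only.
deps = {
    "app":  ["json", "http"],
    "json": [],
    "http": ["tls"],
    "tls":  [],
}
license_of = {"app": "MIT", "json": "WTFPL", "http": "GPLv3", "tls": "CDDL"}

# Pairs considered incompatible (e.g. the CDDL/GPLv3 pair from the question).
INCOMPATIBLE = {frozenset({"CDDL", "GPLv3"})}

def transitive_licenses(root: str) -> set:
    """Every license reachable from `root` through its dependencies."""
    seen, stack, acc = set(), [root], set()
    while stack:
        d = stack.pop()
        if d not in seen:
            seen.add(d)
            acc.add(license_of[d])
            stack.extend(deps[d])
    return acc

def conflicts(root: str) -> list:
    """'Incompatible' is a valid answer: report any clashing pairs."""
    found = transitive_licenses(root)
    return [set(pair) for pair in INCOMPATIBLE if pair <= found]

assert transitive_licenses("app") == {"MIT", "WTFPL", "GPLv3", "CDDL"}
assert conflicts("app") == [{"CDDL", "GPLv3"}]
assert conflicts("json") == []
```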


Question: The thing I'm most curious about is: what's the plan for type-classes? ..or if not type-classes, how is similar abstraction intended to be achieved?

https://github.com/unisonweb/unison/issues/502

Answer: The plan for type classes is currently to do something very similar to what Scala 3 is doing. Type classes are on the roadmap for a future version, but not something anyone is working on currently to my knowledge.


Question: So in this case pushing one function at a time is the expected workflow? And then test code would be like any other code that lives within the “prod” code? I'll have to have a look at how that works… It is such a change from how we play the game currently… It’s a “Software Engineering Platform”… All in one.

Answer: You can push as many functions as you like at a time. I often will write a dozen or so definitions and add them once I’m happy. I forgot to show off testing. The normal workflow is to add a watch expression with test> myTest = …. UCM expects your test to have a particular type, and it understands that it’s a test. Otherwise it’s just a Unison definition like any other. The convention I’ve been using is for something called foo to have all its tests under a namespace foo.tests.

See here for a video of me adding a test:


Question: I'm interested to know what a large codebase refactoring might look like.

Answer: It’s pretty good! For simple refactorings, UCM can just propagate your changes automatically. For more complicated refactorings that introduce type changes and other conflicts, UCM computes an “edit frontier” for you and guides you through pushing that frontier along your dependency chain. See https://www.unisonweb.org/docs/refactoring/

Question: And also code reviews.

Answer: We are looking to improve this. Currently the workflow is inside the CLI, so it’s a bit janky. https://www.unisonweb.org/docs/codebase-organization/#prs

But ultimately we hope to have something like the code review workflow on e.g. Gitlab

The potential here is enormous, and Unison makes it possible to make something insanely great for code reviews


Question: I was told that "names don't matter". Or maybe they do? I don't remember…

Answer: Since in Unison every expression has a canonical, automatically assigned “name” (the hash), other names are just window dressing, a UI for the programmer.



Rúnar Bjarnason

Co-Founder
Unison Computing


G'Day, Nerves!

What is Nerves? What's the community like? How is it used? These questions and more will be answered by this adventure into the world of functional embedded development, using Elixir and the BEAM VM to create fault-tolerant hardware systems... or blinking LEDs... YMMV.

Q&A

I show several links in my talk. Here they are in order, so you don't have to search manually:

* https://hexdocs.pm/nerves/getting-started.html
* https://github.com/jjcarstens/replex
* https://github.com/F5OEO/rpitx
* Erlang! The movie - https://www.youtube.com/watch?v=BXmOlCy0oBM
* https://erlang.org/doc/man/heart.html
* https://github.com/nerves-project/nerves_heart
* https://embedded-elixir.com/post/2018-12-10-heart/
* Nerves of Steel - https://www.youtube.com/watch?v=tuY2IxAfe-I
* https://github.com/nerves-project/nerves_examples
* https://elixir-circuits.github.io
* https://github.com/elixir-circuits/circuits_quickstart
* https://github.com/fhunleth/nerves_livebook
* https://github.com/drizzle-org/drizzle
* https://github.com/nerves-keyboard
* https://farm.bot
* https://smartrent.com
* #nerves channel in Elixir slack - https://elixir-lang.slack.com/archives/C0AB4A879
* Nerves Forum - https://elixirforum.com/c/nerves-forum/74
* https://nerves.group
* https://github.com/nerves-project

Question: Would Nerves suit hard realtime systems?

Answer: Well, this might be a loaded question of personal preference. I think you could definitely start with it and get very far, but it depends on how strictly you need “hard” real time as Nerves is prob closer to being soft real time.

Question: I think Erlang itself is meant for soft realtime systems

Answer: Yes. The GRiSP project is another embedded Erlang setup, targeted at “bare-metal Erlang” and hard real-time event handling, which might be worth a look as well:

https://www.grisp.org

The difference is that GRiSP is custom-designed hardware with its own Erlang VM, vs Nerves, which relies on the Linux kernel + Buildroot. But at the end of the day, Erlang is more for soft real time.

Oh! And what we do for this requirement is typically to do the hard real time on a microcontroller that interfaces with Nerves, and then Nerves does command and control in Elixir.


Question: I'm a realtime systems noob. What's "hard" realtime? Very small slice guarantee? Very low probability of missing a slice?

Answer: It's all about guarantees down to processor deadlines. It basically means you cannot miss any computational deadline when chomping the bits, which gets problematic under processor load because timings may fail, whereas “soft real time” can handle those missed deadlines with reschedules and eventually hit the mark. But we’re talking imperceptible deadlines that only become noticeable as misses accrue over time.


Question: Has there been any work with Nerves and network booting?

Answer: I think you might need to elaborate your question for me. Just to boot an image remotely over the network? Or maybe... what's your direct goal here?

Question: Right, to boot an image from the network instead of using an SD card. I was thinking of the ability to have more than one fallback, and less upstream internet usage if a firmware can be downloaded once and then shared on a local network.

Answer: There is no work on network booting that I know of. For the RPi there has been work to utilize initramfs and some other intermediate files, but that's an interesting idea, to be able to netboot. Though the main use case I can think of would be obtaining the first firmware, such as when provisioning new hardware.

I’m going to have to look at this more. There is also the option to use Erlang to connect local devices in distribution and pass the new firmware there instead

We have a lot of devices on cellular, and reducing network traffic is a huge deal. The route we’ve been going is delta updates: only sending the changed bits of the firmware to the device. That's currently supported by default in the official systems, and https://nerves-hub.org supports delivering those firmware updates.
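The idea behind a delta update can be sketched as a naive block-level diff: transmit only the chunks whose hash changed. This is an illustrative toy, not how NervesHub or the official systems actually compute deltas:

```python
import hashlib

BLOCK = 4096  # bytes per chunk

def chunks(image: bytes):
    return [image[i:i + BLOCK] for i in range(0, len(image), BLOCK)]

def make_delta(old: bytes, new: bytes) -> dict:
    """Map chunk index -> new bytes, only for chunks that changed."""
    old_chunks = chunks(old)
    delta = {}
    for i, c in enumerate(chunks(new)):
        same = (i < len(old_chunks) and
                hashlib.sha256(c).digest() == hashlib.sha256(old_chunks[i]).digest())
        if not same:
            delta[i] = c
    return delta

def apply_delta(old: bytes, delta: dict, new_len: int) -> bytes:
    """Rebuild the new image on-device from the old image plus the delta."""
    out = chunks(old)
    for i, c in sorted(delta.items()):
        if i < len(out):
            out[i] = c
        else:
            out.append(c)
    return b"".join(out)[:new_len]

old = b"a" * BLOCK + b"b" * BLOCK
new = b"a" * BLOCK + b"c" * BLOCK   # only the second chunk differs
delta = make_delta(old, new)
assert list(delta) == [1]           # one chunk to transmit, not the whole image
assert apply_delta(old, delta, len(new)) == new
```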


Question: I'm going to have to get into this. I'm also a qualified security technician (alarms / CCTV / access control / home automation). Are there implementations on commercial-grade hardware?

Answer: What you’d prob want to look at is which “systems” have been ported so far. We develop a lot of in-house hardware for this using Octavo and i.MX6ULL chipsets, which I see used a lot more: https://hex.pm/packages?search=nerves_system&sort=recent_downloads

And there are other places using it commercially, like https://farm.bot . I believe there is also a startup to use it with grid system monitoring

Basically, you would see what processor the target hardware is using, see if there is a nerves_system_* port for it, or port it yourself.

Oh! And we also use it with access control systems with the i.MX6ULL chip, but that has been a pretty custom setup.



Jon Carstens

Core Team
Nerves


In The Belly Of The Whale: Tales From Haskell In The Enterprise

What is it like to bring Haskell to a large enterprise with an established culture? There are success stories for specialized teams, but what about the mainstream? We've been working in the trenches alongside enterprise developers doing Haskell for nearly 3 years and have lots to report. Ranging from the Haskell language and the ecosystem around it to how it has impacted the systems and people working with it, we will examine the journey from an introduction that took us into stormy seas and the vision of clear skies ahead.

Q&A

Question: I'm lucky enough to work for a business that "gets it", but the two main concerns I hear from not-yet-FP business are:

- How do you convince the business to run that first experiment?

- You mentioned that recruitment was hard for enterprise Haskellers. The anecdotes I've heard are that you tend to get FPers beating down your door if you post a job ad. But it sounds like that's not your experience: is it hiring enterprise Haskellers that's hard, or just Haskellers? Is it hard for the "standard" recruitment agencies (used by enterprises) to fill these roles?

Answer: As far as convincing the business, we succeeded by first getting the leadership to trust us as software engineers first. Then when we came and recommended Haskell they were at least primed to listen.

We also pitched them a project that was not really an experiment, but would end with replacing legacy COBOL systems that the business had been trying to replace for years. So the project value was there.

Definitely cultivating strong, trusting relationships with key leaders was essential


Question: When writing libraries, what's your approach to pushing abstractions on library users? E.g. the library offers something A that can be seen as a Monoid: do you just expose an instance Monoid A to the library user, or do you expose your own (monomorphic) set of functions operating on A? What if it were an Applicative? Or a Comonad?

Answer: Great question. For commonly used things like Monoid and Applicative we will just go ahead and expose instances, as long as that's the most ergonomic answer. For more exotic things (e.g. Comonad or free applicatives) we will try to wrap them up and hide them.

Question: Thanks David. Clearly there's a spectrum of abstractions going from well-known to obscure, and this depends heavily on the developer background.

Answer: That is a great point. And if we can expose instances of things, that is generally more acceptable than creating abstractions that require users to use something. Saying "here is an instance of Profunctor for our X" is different than saying "callers need to use/understand Profunctor", just as an example.

Question: My favourite example is a graph parametrised on the vertex and edge types, Graph v e. Should I expose mapNodes :: (a -> b) -> Graph a e -> Graph b e and mapEdges :: (a -> b) -> Graph v a -> Graph v b, or just say it's a bifunctor? Anyone not familiar with bifunctors would balk at the abstraction, feeling it's unnecessary. Maybe we can do both, and also expose mapNodes and mapEdges in terms of fst and snd? And of course bifunctor is just an example; insert your favourite type class.

Answer: That's a great example. Having mapNodes and mapEdges explicitly defined and present in the docs is a huge help to newcomers. It's hard for them to hunt down how to do something if it's only exposed as a typeclass instance they're not familiar with. That doesn't stop you from also providing whatever typeclass instances are possible, though.

I usually try to just write the functions I want for data types, and then package them up as typeclass instances afterwards, such as

instance Applicative Foo where
 (<*>) = applyFoo

Question: Yes, perhaps providing both would be the best option (and perhaps it would be more practical to do fst = mapNodes rather than vice versa). As @Don Syme was mentioning yesterday, discoverability is an important part.

Answer: Yes -- discoverability is a great way to think about it.
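The pattern discussed in this thread, friendly named functions in the docs backed by one generic mapping operation, can be sketched language-neutrally in Python. `Graph`, `bimap`, and the helper names are hypothetical, standing in for the Haskell Bifunctor example above:

```python
from dataclasses import dataclass

@dataclass
class Graph:
    nodes: list  # vertex values
    edges: list  # (from_index, to_index, label) triples

def bimap(f, g, graph: Graph) -> Graph:
    """The generic operation (the analogue of a Bifunctor instance):
    map f over vertex values and g over edge labels."""
    return Graph([f(n) for n in graph.nodes],
                 [(a, b, g(l)) for (a, b, l) in graph.edges])

# Named, discoverable wrappers: what a newcomer finds in the docs,
# defined in terms of the single generic operation.
def map_nodes(f, graph: Graph) -> Graph:
    return bimap(f, lambda l: l, graph)

def map_edges(g, graph: Graph) -> Graph:
    return bimap(lambda n: n, g, graph)

g = Graph([1, 2], [(0, 1, "x")])
assert map_nodes(lambda n: n + 1, g) == Graph([2, 3], [(0, 1, "x")])
assert map_edges(str.upper, g) == Graph([1, 2], [(0, 1, "X")])
```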


Question: How much has availability of libraries been an issue with things you needed to do in the Enterprise?

You can always write it yourself but often saying you need to write lib X before delivering feature Y is a hard sell.

Answer: It's definitely come up, but hasn't been a significant issue so far. There are libraries for most everyday tasks we want to do. Occasionally some of those libraries aren't ideal for newcomers though.

We've managed to walk a fine line with this. For a great number of things we are able to use libraries that already exist, but we have also written things to make our lives easier where applicable.


Question: I'm curious to know what patterns you use in your enterprise codebases that you've found work well to keep Haskell approachable. What effect patterns do you use? e.g. the ReaderT pattern, mtl-style, etc. Do you use lenses for deeply nested data types?

Answer: So far we've found our lens code to be less maintainable than the equivalent non-lens-based code, so we actually try to avoid/remove lenses at this point.

Our monad stacks tend to be newtypes around ReaderT (and sometimes a library's transformer when required), with typeclass instances provided for the monad type as required. So very much like MTL, but with a limited number of transformer layers.

We definitely try to keep to a single form of error handling in any given monad.

Keeping smaller pure functions definitely helps, as does favoring a more explicit style rather than lots of typeclasses, and reducing things that happen "magically", like Template Haskell and quasiquoters.


Question: I'd be interested to know why David and Trevis found hiring hard, then: was it finding enterprise-friendly Haskellers, finding people who would be willing to pick up Haskell along with their other duties, teaching their recruiters how to reach the Haskell communities, or something else entirely?

Answer: It's definitely helping the enterprise to learn how to find and hire the right Haskell developers. Enterprise hiring carries a lot more with it than a small business, and the enterprise might have to make compromises, like not requiring relocation as part of hiring.

Question: So it's a bit of "helping the enterprise become comfortable with things like relocation". That makes sense. And I suppose "rightness" means hiring people who aren't immediately going to apply the ideas from the latest conference preprint to a core business problem. What else do you look for?

Answer: Yeah, that's exactly right. We would really like to see the enterprise hire good software engineers who happen to know (or be interested in) Haskell, rather than someone who knows Haskell really well, but not how to apply it as part of a team.

Being good team members with empathy for those working with you and that not everyone is going to be interested in very advanced things is highly important.


Question: Do you find it easier to teach advanced Haskellers how to choose their tools carefully, or train people who "get" enterprise development into the ways of Haskell?

Answer: The real key is how much empathy the person has. If you're an advanced Haskeller but you can understand that most of those around you won't be then that's an easy sell. Similarly if you are an experienced enterprise developer but empathetic to the problems Haskell helps solve, that can also be an easy sell.

To be honest, we've had both successes and challenges with advanced Haskellers, so it really depends on judging the person. It can be hard to judge via interviews too, so that contributes to hiring challenges as well.


Question: Follow up from the library question: what big gaps in our library ecosystem have you encountered in the projects you've worked on?

Answer: The biggest gap we encounter is more around usability than functionality. For the most part we can usually find a library that does what we need, but whether or not that library is approachable to newcomers or not is another question.

Some of the biggest gaps are in libraries that let users stick to a more "Simple Haskell" that remains accessible to those coming into the language.

Question: How much do you think the "usability gap" can be solved via documentation / examples / tutorials?

Answer: Certainly better documentation will help, but if the library requires you to turn on TypeOperators, DataKinds and the like just to use it, we've observed that it impacts usability in ways that go beyond examples and tutorials. Now error messages are harder to follow and the code becomes harder to maintain.

Similarly, some libraries require you to use lens, which is basically asking a newcomer to learn a new sub-language of Haskell

Also, being very operator-heavy without named functions backing the operators can be quite harmful for newcomers.

I really like what PureScript did there in requiring every operator to be an alias for a named function.



David Vollbracht

Partner
Flipstone Technology Partners, Inc.


Trevis Elser

Software Engineer
Flipstone Technology Partners, Inc.


A tale of Nix and Nickel

What do package management and functional programming have in common? More than it seems!

In this talk, I will introduce the Nix package manager, which applies principles from functional programming to overcome fundamental challenges of package management. It crucially relies on a functional programming language, also called Nix, whose current state and shortcomings I will discuss. I will finish by presenting our new configuration language, Nickel, which attempts to solve some of them.

Q&A

Question: What is the type inference like in Nickel, compared to Dhall? In particular, maintaining type annotations for [] or `None`s in Dhall is quite frustrating

Answer: It's an ML-like type inference, so like "boring" (that is, without fancy types) Haskell or OCaml. The difference is that it doesn't generalize for you: if you need a polymorphic type, you have to write it yourself. The type system is less powerful than Dhall's, but the upside is that you don't have to annotate empty lists, for example. Nickel doesn't have sum types yet, though, because they are surprisingly non-trivial to design well alongside the gradual typing and contracts. Oh, and I forgot one important thing: Nickel has row types (structural typing for records), and type inference for them, which can be helpful when handling JSON-like data.


Question: What is your opinion on the Guix project?

Answer: I've not used Guix extensively, but it is also a good solution. It is based on the same underlying principles as Nix, and solves more or less the same problems. It is younger (2013 vs 2006 for Nix) so it made different choices: for example, it uses Scheme as a language to describe packages (which is better in some sense than current Nix, because the language does the same but is more widespread, comes with mature tooling, and so on, but it also doesn't solve the typing/abstraction problem).

All in all, with the privilege of hindsight, some parts are maybe more consistent than Nix. On the other hand, it is less mature: the package set and the community are smaller, the resources are even scarcer than for Nix, and so on. Maybe try both and see what fits you.

If you're interested in trustability, check out Trustix: https://www.tweag.io/blog/2020-12-16-trustix-announcement/


Question: Is the Nix project interested in adopting Nickel in Nixpkgs any time soon?

Answer: Well, we first have to get this first release out and convince people that it does solve part of the problem. Changing the language is not a small change. But I hope it will happen at some point.

Question: or adding similar gradual typing to nix itself?

Answer: To be honest, Nickel is not very far from that. Notwithstanding syntax, it is close to being a superset of Nix (although that's not exactly the case currently). The problem is that Nix syntax is kind of idiosyncratic: using : for function parameters prevents using it for type annotations, for example. None of this is awful, but all these little things add up. And syntax compatibility matters, because without it you can't just add types to an existing Nix code-base seamlessly.


Question: Is there an alternative approach to nixpkgs and learning to structure nix projects other than reading the source and trying oneself? Like, is there a Gang of Four of Nix book?

Answer: Yes, there is the Nix Pills series indeed: https://nixos.org/guides/nix-pills/. But beginner-friendly documentation and tutorials are clearly missing; that's a big issue for adoption, unfortunately, and one we're trying to work on.


The links from the last slide:

Nickel repo: https://github.com/tweag/nickel/

Our blog: https://www.tweag.io/blog/

My mail: yann.hamdaoui@tweag.io



Yann Hamdaoui

Software Engineer
Tweag I/O


Elixir for UAV Avionics

Fault tolerant, concurrent, distributed and functional with superior binary wrangling capabilities and network/device connectivity - the Erlang platform designed to run the world's telephony systems also turns out to be a perfect fit for autonomous aerial vehicle control software.

In this presentation Robin will give a brief introduction to the world of autonomous aircraft, autopilots, onboard companion computers, ground stations and communication links. He will demonstrate the use of his open source Elixir MAVLink library to communicate with an Ardupilot autopilot from Elixir code, which in turn will control an accurate X-Plane simulation of a large UAV he has been working with since 2014.

Q&A

Question: Do you know of a good intro to PID / control systems for current commercial aircraft for a lay person?

Answer: I’ll ask around - the field is called “Control Theory”

Question: How much does the latency of comms feed into how you design such a system? E.g. for a remote UAV.

Answer: You set up a flight plan so that you’re not wiggling joysticks for a vehicle you can’t see. Then you’re more acting as an air traffic controller than a pilot.


Question: What’s a low-cost option for getting into the hardware?

Answer: This is the first model I installed Ardupilot on

Hobbyking.com is a great place to start. There is an HKPilot which is equivalent to the earlier Ardupilot hardware. https://hobbyking.com/en_us/micro-hkpilot-mega-micro-sized-flight-controller-and-au[…]xQWmgBz-sDzi0MVue1C2xoC9xYQAvD_BwE&gclsrc=aw.ds&___store=en_us

Question: Is Pixhawk still a good thing? Back in the day it was the bee's knees.

Answer: I’ve never used Pixhawk; they are definitely still around. I think they’re part of the UAVCAN consortium (an alternative to MAVLink), which is Europe-centric. Remember that the HKPilot will not support current versions of Ardupilot. I know the latest version of Arduplane it will support is 3.4.


Question: Have you had much to do with the CanberraUAV folks? Their work on Ardupilot was really fascinating, as was their real-time image processing on the UAV.

Answer: I’m on the mailing list, visited them at the ANU field robotics lab and support them on Patreon. We’re lucky to have so much of this going on in Australia.


Question: I take it one can have a play without having any hardware by noodling with simulators?

Answer: Yes, absolutely - you don’t even need X-Plane; there is a simulator for most vehicle types built into the Arduplane test harness. I use X-Plane because it can do high-fidelity simulations of aircraft you build in PlaneMaker - full-scale aircraft designers use it during design and development.

Be prepared to get your hands dirty poking around in the Arduplane code to see how things work. Also be sure to download the correct tag for the vehicle type you’re trying to drive.


Question: How often do you get to fly the real thing? (as opposed to simulations)

Answer: Not that often unfortunately - there’s a bit of flying at the end of this video: https://www.youtube.com/watch?v=FzZyfGh81ck&t=283s. I’ve spent more time in the workshop building, programming and simulating. I do not recommend this approach. CanberraUAV seem to be out flying most weekends - the difficulty in Sydney is access to places where you can safely fly a model like this. We did our flying at Hawkesbury Model Air Sports at Vineyard (out near Richmond)


Question: As far as I understand here in Australia you need to keep a UAV within visual line-of-sight. Does it restrict what you can do with your planes? (distance-wise) Are there any exceptions to this rule?

Answer: The Outback Challenge is the only time you can operate beyond line of sight as a hobbyist in Australia (unless you’re Boeing). All your test flights need to be within line of sight, so during development and testing you fold the flight plan up to all happen near launch. Don’t discount ardurover or ardusub as options.

One thing I didn’t mention is that the reason I use Navio is that their version of Raspbian with the interrupt fix has no problem running the BEAM VM with Elixir alongside Ardupilot :-)

So one core can run the autopilot and you still have more free to run Elixiry goodness.

Also to qualify my statement about using Elixir for the main loop, here is an interesting article about the suitability of Nerves for real time computing: https://www.verypossible.com/insights/is-nerves-a-real-time-embedded-system



Robin Hilliard

Founder and CTO
Rocketboots


Testing smart contracts with QuickCheck

I will talk about some recent work on a random testing framework for smart contracts on the Cardano blockchain, which supports the world's largest "proof of stake" cryptocurrency. In contrast to the Ethereum blockchain, Cardano contracts are written in Haskell, but of course, they still need to be tested. I'll talk about the testing framework we've built, and its most novel aspect: not only must we test that nothing bad ever happens, but also that something good is always possible.

Q&A

Question: Something I’ve always wanted for this sort of testing is, while going through this process of finding the correct test, all the failing tests should automatically ask you: Is this something you expected to fail? If so, I’ll add a unit test to make sure it continues to break

Answer: Yes, saving failed tests can be useful. I generally don't bother if they were quick to find, but shrunk tests that were costly to find can be worth saving. They do tend to bit rot though. Ah, yes, and I do quite often write properties that say "this should fail". That is a useful test of the generation, it says "my testing is good enough to find this error".


Question: Is this extension to QuickCheck released on Hackage?

Answer: This code is in the IOHK plutus repo. It's customized for Plutus so would need some modification to go on Hackage.


Question: Do you have any suggestions for helping others build up the skills in determining what properties are useful? They can often seem like a difficult topic for some to wrap their heads around, particularly if they are coming from a heavy unit test background with e.g. Ruby.

Answer: Yes. I published a paper last year called "How to specify it!" which is on exactly this topic... how to come up with properties in the first place. In the paper I talk about pure functions, which are the easier case, but on the other hand that's how come I can discuss 4-5 different approaches in one paper. It appeared in Trends in Functional Programming 2019 (which came out in 2020), and is also in the Chalmers research repository: https://research.chalmers.se/en/publication/517894

The ideas apply to code with state also. Then one useful way to evaluate a property is to make it fail. Plant a number of reasonable bugs and make sure they get caught.
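The "plant a bug and make sure it gets caught" idea can be sketched without any framework. Below is a hand-rolled stand-in for QuickCheck in plain Scala (all names here are made up for illustration): a deliberately buggy `reverse` is planted, and a small random tester must find a counterexample to the reverse-twice-is-identity property - if it can't, the testing isn't good enough.

```scala
import scala.util.Random

// A deliberately buggy reverse: it drops the last element (the "planted bug").
def buggyReverse[A](xs: List[A]): List[A] =
  if (xs.isEmpty) xs else xs.init.reverse

// A property of reverse: reversing twice is the identity.
def prop(xs: List[Int]): Boolean =
  buggyReverse(buggyReverse(xs)) == xs

// Minimal random tester: generate random lists, return the first counterexample.
def check(trials: Int, seed: Long = 42L): Option[List[Int]] = {
  val rnd = new Random(seed)
  Iterator
    .fill(trials)(List.fill(rnd.nextInt(10))(rnd.nextInt(100)))
    .find(xs => !prop(xs))
}

// "My testing is good enough to find this error": the planted bug MUST be caught.
val counterexample = check(1000)
```

Any non-empty list is a counterexample here, so a thousand random trials should find one; a tester that reports `None` would itself be suspect.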

Jon Carstens - G'Day, Nerves!

I show several links in my talk. Here they are in order so you don't have to manually search

* https://hexdocs.pm/nerves/getting-started.html
* https://github.com/jjcarstens/replex
* https://github.com/F5OEO/rpitx
* Erlang! The movie - https://www.youtube.com/watch?v=BXmOlCy0oBM
* https://erlang.org/doc/man/heart.html
* https://github.com/nerves-project/nerves_heart
* https://embedded-elixir.com/post/2018-12-10-heart/
* Nerves of Steel - https://www.youtube.com/watch?v=tuY2IxAfe-I
* https://github.com/nerves-project/nerves_examples
* https://elixir-circuits.github.io
* https://github.com/elixir-circuits/circuits_quickstart
* https://github.com/fhunleth/nerves_livebook
* https://github.com/drizzle-org/drizzle
* https://github.com/nerves-keyboard
* https://farm.bot
* https://smartrent.com
* #nerves channel in Elixir slack - https://elixir-lang.slack.com/archives/C0AB4A879
* Nerves Forum - https://elixirforum.com/c/nerves-forum/74
* https://nerves.group
* https://github.com/nerves-project

Question: Would Nerves suit hard realtime systems?

Answer: Well, this might be a loaded question of personal preference. I think you could definitely start with it and get very far, but it depends on how strictly you need “hard” real time, as Nerves is probably closer to being soft real time.

Question: I think Erlang itself is meant for soft realtime systems

Answer: Yes. The GRiSP project is another embedded erlang setup targeted for “bare-metal erlang” and hard real-time event handling which might be worth a look as well:

https://www.grisp.org

The difference being GRiSP is custom-designed hardware and Erlang VM vs Nerves, which relies on the Linux kernel + Buildroot. But at the end of the day, Erlang is more for soft real time.

O! And what we do for this requirement is typically to do hard real time on a microcontroller that interfaces with Nerves, and then Nerves does the control and command in Elixir.


Question: I'm a realtime systems noob. What's "hard" realtime? Very small slice guarantee? Very low probability of missing a slice?

Answer: It's all about guarantees down to processor deadlines. It basically means you cannot miss any computational deadline when chomping the bits, which gets problematic under processor load because timings may fail, whereas “soft real time” can handle those missed deadlines with reschedules and eventually hit the mark. But we’re talking about imperceptible deadlines that only become noticeable as misses accrue over time.


Question: Has there been any work with Nerves and network booting?

Answer: I think you might need to elaborate your question for me? Just to boot an image remotely on the network? Or maybe... what’s your direct goal here?

Question: Right to boot an image from the network instead of using an sd card. I was thinking for the ability to have more than 1 fallback and less upstream internet usage if a firmware can be downloaded once and then shared on a local network.

Answer: There is no work I know of on network booting. For the RPi, there has been work to utilize initramfs and some other intermediate files, but that’s an interesting idea, being able to netboot. Though the main use case I can think of would be obtaining the first firmware, such as when provisioning new hardware.

I’m going to have to look at this more. There is also the option to use Erlang to connect local devices in distribution and pass the new firmware there instead

We have a lot of devices on cellular, and reducing network traffic is a huge deal. The route we’ve been going is delta updates, only sending the changed bits of the firmware to the device. That’s currently supported by default in the official systems, and https://nerves-hub.org supports delivering those firmware updates.


Question: I'm going to have to get into this. I'm also a qualified security technician (alarms/ CCTV/ access control/ home automation). Are there implementations on commercial grade hardware?

Answer: What you’d probably want to look at is which “systems” have been ported so far. We develop a lot of in-house hardware for this using Octavo and i.MX6ULL chipsets, which I see used a lot more: https://hex.pm/packages?search=nerves_system&sort=recent_downloads

And there are other places using it commercially, like https://farm.bot . I believe there is also a startup to use it with grid system monitoring

Basically, you would see what processor the target hardware is using, see if there is a nerves_system_* port for it, or port it yourself.

O! And we also use it with access control systems with the i.MX6ULL chip, but it has been a pretty custom setup.



John Hughes

Creator
QuickCheck


A History of Enterprise Monads

 

The early 2010s were exciting times for Scala. The language & ecosystem started to professionalize, both from a technical (binary compatibility) and a community point of view (many conferences were started). Not too long after Lightbend – then Typesafe – was founded, I registered the typelevel.org domain on a whim and put together a rudimentary website advertising a few FP-minded Scala libraries. Fast forward to today: Typelevel is known for a wealth of functional libraries, beginner-friendly educational resources, a series of conferences and a distinct ecosystem – including a custom compiler – within the Scala community. In this talk, I’d like to examine what got us there and into the mainstream.

Q&A

Question: Cats Effect was brought up as a separate library, separate from Cats, was it? They seem very close together.

Answer: I believe (IIRC) that originally the Task abstraction came from the Red Book where they were trying to build a better Future than the stdlib had to offer.

It was integrated into scalaz at some point and used in scalaz-stream, but then the latter library evolved quite a lot, now being fs2. Somewhere in the middle it was decided that it's best to have async abstractions in a dedicated library so that it could serve as a shared kernel for streaming libraries: e.g. both Monix and fs2 support Cats Effect typeclasses.


Question: You mentioned a library, PureConfig, that uses shapeless. Is PureConfig recommended over Ciris these days?

Answer: Ciris is also a Typelevel library. It has some different design decisions, so it comes down a bit to personal preference also.

I think Ciris has no derivation support, though (From Jakub K: Yep, on purpose - there's no standard way to derive things, like how do you load a string field?)

OTOH it integrates nicely with refined



Lars Hupel

Senior Consultant
INNOQ


Connecting the dots - building and structuring a functional application in Scala

Functional programming relies on building programs from orthogonal, composable blocks. That's likely one of the reasons why full-blown application frameworks haven't gained much traction in the functional ecosystem.

However, we still need to structure our code and wire up our applications in a way that lets us keep them modular, testable and simply pleasant to work with - in this talk, we will learn how to do just that!

In this talk, we will walk through the architecture design and testing setup for a functional app on the Typelevel stack that integrates with several third-party services to process data in a streaming fashion, and expose its results to downstream clients.

Q&A

Question: Does Scala power Disney+?

Answer: I wouldn't be here if it didn't ;)


Question: What is the font used for the code snippets in the slides?

Answer: It's the jetbrains font in vscode, although I don't use ligatures (for some reason Keynote didn't allow me to turn them off)


Question: Personally, I’m not a fan of tagless final style. I prefer the “old way”: passing dependencies to class constructor

Answer: To each their own, to me using Reader to pass dependencies was clunky and boilerplate-y. With meow-mtl, there's even more machinery than with implicits. And there's definitely lots of machinery with ZIO layers, you just don't see it.

I prefer implicits (F or no F) because it's a native language feature, and as such it's specified quite well in the language specification, unlike the details of how ZLayers compose :)

But yeah, you can apply a lot of the same practices with ZIO layers too


Question: What about using https://cir.is/ for config? After PureConfig and ZIO Config, why do we need another functional config library?

Answer: zio-config is obviously more friendly to existing ZIO users, and PureConfig uses configuration files in a different language (HOCON). I'm more of a fan of configuration as code: it has fewer gotchas (classpath ordering doesn't really matter, for one) and more type safety. I'm not sure if zio-config does that as well.


Question: I am new to Scala. Is “capability traits” a term you created? Is there any similar terminology being used by the community? Also, regarding the practice of using a context bound to pass a dependency that only has one possible instance: does that pattern have a name?

Answer: I've seen that term used around fs2 and other libraries, so definitely not me

For the former, I think that's just it - using context bounds. Or the tagless final pattern (which, as it's used in Scala, is kind of different from what the original author of the "tagless final" term meant).

You could call the capability traits "lawless type classes", since that's essentially what they are - some operations defined for a type, without any algebraic laws governing them.

Question: Can you explain why using capability traits is preferred over lawful type classes? Apologies if you already explained this in your talk. You mentioned it in this slide (https://speakerdeck.com/kubukoz/connecting-the-dots-building-and-structuring-a-functional-application-in-scala?slide=36), but I missed the rationale behind it

Answer: Sync and Async are super powerful type classes. The more power a TC has, the fewer types can implement it. If it has fewer operations (or ones that are easier to define for a variety of types), it's going to be more generic that way.

For example, you can implement Async for IO, monad transformers, a free monad... but that's roughly as far as it goes - you can't get Async for Option.

Also, using the more powerful thing is against the principle of least power, so instead of giving your method (like def findUsers[F[_]: Network]) a couple of operations, you could give it the whole world of Async - which is capable of lifting any operation into F, so basically all the power there is in the world.

This goes away the moment you make your F concrete, btw - you have Async built in.

Sync/Async are FFI type classes, as they allow you to do any external world interaction you want, and lift that into an effect. Other type classes like Monad and Functor are insanely less capable (you can only introduce values via pure and operations on the result, or composing effects that you got from other interfaces), and thus very fine to use.

Finally, if you ever decide to make something not be a capability trait, it's going to be easier to provide a fake instance than if you had Async there in the first place - you can't mock Async (well, you can, but shouldn't, and I really wish you wouldn't), but you could implement a "virtual" ProcessRunner or Network for tests.

Final point: e.g. in fs2 all the File operations require Files[F], and if they required Sync / Concurrent instead it'd mean you'd have Sync / Concurrent everywhere too - unless you wrapped it in your own Files.

Anyway, the point is constraining the operations to a subset of what your effect is capable of - to limit your knowledge of what kind of type it could be, and to avoid making assumptions specific to that data type. Later on it can pay off in fake-ability... and it usually helps separate concerns too. Instead of having Async you can split your usage into Files and Network, for example, and that helps when you have more convoluted code.
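A minimal sketch of the capability-trait idea (the `Network` capability, `fetch`, `Id` and `fakeNetwork` below are all hypothetical names, not from any library): the method only gets the operations it asks for via a context bound, and a test can supply a fake instance without mocking anything as powerful as Async.

```scala
// A "lawless" capability trait: just operations for a type, no algebraic laws.
trait Network[F[_]] {
  def get(url: String): F[String]
}

// Constrained to Network only, instead of a do-anything FFI class like Async.
def fetch[F[_]: Network](url: String): F[String] =
  summon[Network[F]].get(url)

// For tests, the identity "effect" is enough to fake the capability.
type Id[A] = A

given fakeNetwork: Network[Id] = new Network[Id] {
  def get(url: String): String = s"stubbed response for $url"
}

val result: String = fetch[Id]("http://example.com")
```

Because `fetch` only knows about `Network`, it cannot sneak in arbitrary side effects, and the test instance is a few lines rather than a mock of an entire effect system.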


Resources:

If someone wants to get familiar with the app I'm going to use as the example, feel free to dig in (it's not very well documented yet though, I'll try to make that better over the upcoming days): https://github.com/kubukoz/dropbox-demo

YouTube Channel: https://www.youtube.com/channel/UCBSRCuGz9laxVv0rAnn2O9Q



Jakub Kozłowski

Scala Developer
Disney Streaming Services


Abstract Fun-sense: a functional perspective on life

Everything can be a function if you look at it the right way; we can characterise familiar concepts like sets, lists and even plain values as functions.

Thinking about such basic objects as functions can seem unnecessarily abstract, but it isn’t just an exercise in increasingly intimidating notation! It turns out to be an elegant perspective that allows us to glimpse a more powerful abstraction.

In this talk, we’ll see how playing with this notion leads to the Yoneda Lemma, a key result in category theory. We’ll build up some intuition and motivation for the Yoneda Lemma, and return to the notion of viewing objects as functions to appreciate some of its implications.

Q&A

Question: So naturality is a condition on functions that preserve identity?

Answer: It’s a condition on a polymorphic function, which captures the intuition that it has to act uniformly on all types.

I think of things that preserve identity as a different class of thing, like functors (which operate on types as well as having map; the map is the thing here that preserves the identity. There isn’t an equivalent of map for naturality).

Question: Hard because you showed that the identity was the only thing this polymorphic function could do that would work across all types, therefore anything else would not.

Answer: Ah yeah, that’s the only thing the polymorphic function can do if it has type T -> T.

Polymorphic/natural functions with more complex type signatures can do other things: e.g. [T]_n -> T can be the function that takes the first element of a list, or even [T]_n -> [T]_n could reverse the list. They’re both still acting uniformly on all types.

Also --- yes!!! It’s hard.

Question: Ah ok, so naturality on T->T is super restrictive, but more interesting with more complex types, because we’re identifying operations that are truly polymorphic

Answer: yes, exactly! that’s a really great way of putting it
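As a concrete check of that intuition (a plain Scala sketch, not code from the talk): `headOption` is a polymorphic function of a more complex type, and its naturality square says that mapping then taking the head equals taking the head then mapping - for every function and every list.

```scala
// A polymorphic (natural) function from lists to options.
def headOpt[A](xs: List[A]): Option[A] = xs.headOption

// Naturality: headOpt commutes with map, for any function g and any list xs.
def naturalitySquare[A, B](xs: List[A], g: A => B): Boolean =
  headOpt(xs.map(g)) == headOpt(xs).map(g)

// Check the square on a few lists, including the empty one.
val holds = List(List(1, 2, 3), List.empty[Int], List(42))
  .forall(xs => naturalitySquare(xs, (n: Int) => n.toString))
```

A function that inspected the elements (say, returning the head only when it is even) would break this square for some `g`, which is exactly the "acts uniformly on all types" condition.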



Dana Ma

Staff Software Engineer
Zendesk


Hashing Modulo Alpha Equivalence

In many applications, one wants to identify identical subtrees of a program syntax tree. This identification should ideally be robust to alpha-renaming of the program, but no existing technique has been shown to achieve this with good efficiency (better than O(n^2) in expression size). We present a new, asymptotically efficient way to hash modulo alpha-equivalence. A key insight of our method is to use a weak (commutative) hash combiner at exactly one point in the construction, which admits an algorithm with O(n*(log n)^2) time complexity. We prove that the use of the commutative combiner nevertheless yields a strong hash with low collision probability.
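As a toy illustration of the commutative-combiner insight (this is not the paper's algorithm; the combiners below are made up for contrast): XOR combines sub-hashes order-independently, which is what makes a combiner usable at a point in the construction where ordering information is unavailable - at the cost of being a weaker hash.

```scala
// Toy commutative hash combiner: XOR is invariant under permutation of inputs.
def combineCommutative(hashes: Seq[Int]): Int =
  hashes.foldLeft(0)(_ ^ _)

// A positional (non-commutative) combiner for contrast, in the usual
// "acc * 31 + h" style: the result depends on the order of the inputs.
def combinePositional(hashes: Seq[Int]): Int =
  hashes.foldLeft(17)((acc, h) => acc * 31 + h)

val hs = Seq("x".hashCode, "y".hashCode, "z".hashCode)

val commutativeAgrees = combineCommutative(hs) == combineCommutative(hs.reverse)
val positionalAgrees  = combinePositional(hs) == combinePositional(hs.reverse)
```

The paper's contribution is showing that using such a weak combiner at exactly one point still yields a strong hash overall.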

Q&A

Question: What effect does this have on GHC compile times?

Answer: I have not tried this in GHC. We are using it at MSR for an optimiser for machine-learning programs


Question: I recall a video once where you jokingly/seriously talked about how you think "Haskell is useless!" - limited side effects, but not 'useful' like other languages.

What do you think about that 10 years on?

How do you think passionate everyday FP'ers can drag Haskell/FP to the "Useful" Nirvana?

Answer: It was indeed jokingly. Haskell is now increasingly used in production, I'm happy to say; so much so that we've started the Haskell Foundation https://haskell.foundation/ to support this increasingly mission-critical role.


Question: Is there a measure of how much improvement there is in sharing, given that the De Bruijn approach etc. is incorrect? Assuming the point here is to find common subexpressions and share references to them, the overall expression size should become smaller. I’m wondering how much better this tends to be at that, due to finding all common subexpressions accurately.

Answer: You mean: for CSE we could put up with false negatives, provided we don't get any false positives. Yes, that's true. Indeed, in GHC, we use an even simpler CSE that has lots of false negatives. But I don't know of any measurements that tell us how much difference that makes in practice. I bet it's not much!

So maybe this talk won't change the world of compiler writers after all. But it's much more satisfying to have an algorithm that finds precisely the right answer, and does so reasonably quickly. My inner geek is happy.

Plus, I suppose, best-efforts algorithms can be fragile to "I made a small change to my program and it runs way slower now... the compiler missed some vital optimisation"



Simon Peyton Jones

Senior Principal Researcher
Microsoft Research, Cambridge

Simon Peyton Jones, MA, MBCS, CEng, graduated from Trinity College Cambridge in 1980. Simon was a key contributor to the design of the now-standard functional language Haskell, and is the lead designer of the widely-used Glasgow Haskell Compiler (GHC). He has written two textbooks about the implementation of functional languages.

After two years in industry, he spent seven years as a lecturer at University College London, and nine years as a professor at Glasgow University before moving to Microsoft Research (Cambridge) in 1998.

His main research interest is in functional programming languages, their implementation, and their application. He has led a succession of research projects focused around the design and implementation of production-quality functional-language systems for both uniprocessors and parallel machines.

More generally, he is interested in language design, rich type systems, software component architectures, compiler technology, code generation, runtime systems, virtual machines, and garbage collection. He is particularly motivated by direct use of principled theory to practical language design and implementation -- that's one reason he loves functional programming so much.


Effective Programming in OCaml

Effect handlers have been gathering momentum as a mechanism for modular programming with user-defined effects. Effect handlers allow non-local control flow mechanisms such as generators, async/await, lightweight threads and coroutines to be expressed composably. The Multicore OCaml project retrofits effect handlers to the OCaml programming language to serve as a modular basis of concurrent programming. In this talk, I will introduce effect handlers in OCaml, walk through several examples that illustrate their utility, and describe the retrofitting challenges and how we overcame them without breaking existing OCaml code. Our implementation imposes negligible overhead on code that does not use effect handlers and is efficient for code that does. Effect handlers are slated to land in OCaml after the addition of parallelism support.

Q&A

Question: What is the efficiency hit you take by using exceptions?

Is it leveraging the existing machinery for exceptions in OCaml?

Answer: Good question. We retain the exception implementation as is, so there is no performance hit there. Results are in Table 1 of https://kcsrk.info/papers/retro-concurrency_pldi_21.pdf


Question: How do you see LWT / Async changing to make use of this work?

Is there space for new libraries to exploit effects in OCaml?

Answer: Thanks to the modularity of effect handlers, we can in fact aim to merge Lwt and Async modulo small breaking changes. In the meantime, we’re building the next gen high performance multithreaded I/O library using effect handlers here: https://github.com/ocaml-multicore/eioio


Question: Is the implementation similar to Alexis King's delimited continuations proposal for GHC?

Answer: Good question. yes, it is similar. Alexis’ library builds on top of existing support for threads in the GHC runtime system. We build that support into OCaml.


Question: Can we implement the same mechanism in GHC to give us algebraic effects in Haskell? Or is typing the problem?

Answer: Typing is a separate issue. See Alexis King’s eff library, which achieves something very similar building on top of GHC’s existing support for lightweight threads: https://github.com/hasura/eff


Question: Curious what the types look like for the `match f() with` from the sample code. Can you show what the types look like? I assume the type inferencing from OCaml continues to work without annotations

Answer: One major difference from other studies of effect handlers (I’m thinking Eff, Koka, etc), is that the current OCaml implementation does not give you effect safety — No static guarantee that all the effects that you perform are handled. This is only as bad as exceptions.

Type inference continues to work. See Section 4.1 in https://kcsrk.info/papers/retro-concurrency_pldi_21.pdf for the types.

We’ve built prototype OCaml implementation with an effect system. See https://www.janestreet.com/tech-talks/effective-programming/

Question: Were there thoughts to use polymorphic variants to represent the types rather than using exceptions?

Answer: The effect system that Leo proposes in the talk above uses structural typing (similar to polymorphic variants) IIRC. The distinction doesn’t matter that much unless you want to ensure effect safety. It turns out that structural typing makes it easier when using abstraction (something like hiding types using modules).

Question: Need to watch Leo’s talk then, exhaustive pattern matching is such a good property in building software I’m loath to give it up.

Answer: Agreed. The current plan is to only upstream the runtime support for effect handlers (i.e. fibers) now, and add the syntactic support with the effect system. This is to ensure that OCaml doesn’t end up with both untyped and typed versions of effects.

The runtime support will be exposed as functions in Obj module. As a library writer, you can build direct style code that takes advantage of the ability to suspend and resume computations. And when the effect system lands, you can change your library to use the effect system, without breaking code for the user of your library.


Question: Is there a way to encode preëmptive concurrency as well?

Answer: Really good question. We do not support preemptive concurrency using handlers now, but we have a good idea of how to do this. The challenge there is to get the asynchronous Yield effect to appear only in contexts where there is a handler for Yield.

We’ve written some thoughts about it in Section 4.2 of https://kcsrk.info/papers/system_effects_feb_18.pdf


Question: Is it likely that OCaml will ever get linearity to ensure the usage of those continuations?

Answer: Highly likely, though there isn’t anyone actively hacking on it now. https://arxiv.org/abs/1803.0279

Question: In the paper linked in the concurrency question above (https://kcsrk.info/papers/system_effects_feb_18.pdf), you note that for async exceptions, a Break effect would intentionally not continue the continuation.

Answer: If I understand correctly, you are talking about Sys.Break. Sys.Break is an exception, not an effect. There is no way to resume here. If I am mistaken, please let me know.

Question: Quote from the paper:

“By treating Break as an asynchronous effect, we can mix synchronous and asynchronous cancellation reliably.

...

Asynchronously-cancellable code can be implemented by handling the Break effect and discarding the continuation.”

Answer: Good point. I think “discarding a continuation” was a lazy phrase. Really, it should have said that you should discontinue the continuation with an exception, just so the resource cleanup code gets to run. For example,

try
 ignore (discontinue k Unwind)
with Unwind -> ()

This code discontinues the continuation with Unwind exception so that all the resource cleanup gets to run, and ignores the Unwind exceptions.

Also, the TFP paper was written a few years ago, before we fully understood what the backwards-compatibility expectations should be.



K C Sivaramakrishnan

Research Associate
University of Cambridge


Journey to the Centre of the JVM

What do you do when your quest for power leads you to implementations which are not just platform- but processor- and even architecture-version-specific in nature? How do you even start tracking down a bug in a Scala-based implementation which is not only non-deterministic but only manifests on certain hardware? In this talk, we will dive into the wild and ill-understood world of CPU architecture, memory models, and JVM intrinsics (all through the lens of very high-level purely functional abstractions!) as we examine the story of the most convoluted and mind-bending bug hunt of my entire career.

Q&A

Question: Now I'm wondering how Rosetta on the M1 solves this problem.

Answer: They did something really clever! They basically embedded a version of x86's memory semantics within the M1 silicon! It's a separate CPU mode that Rosetta just… turns on - this is how the M1 can still be really fast even when emulating x86.

Also I can confirm that, when running the Cats Effect 3 test suite on the Apple M1 using the Azul VM, the bug does not manifest. So the M1 in general is just very impressive work; they put a lot of effort into making it as easy as they could.


Question: To be clear: Java 9 fixes this issue by giving better documentation, but the bug is on both 8 and 9? Or is the bug reproducible on Java 9?

Answer: Yes! In fact the bug even manifests on Java 16. That was one of our very first thoughts: maybe it was just Java 8 specific! But sadly no.


Question: I encountered a situation where cancelling and then joining a Fiber results in the program hanging:

object Main extends IOApp {
  override def run(args: List[String]): IO[ExitCode] = {
    val io = IO.sleep(2.seconds)
    for {
      _     <- IO(println("Start a fiber"))
      fiber <- io.start
      _     <- IO(println("Cancel the fiber"))
      _     <- fiber.cancel
      _     <- IO(println("Join the fiber"))
      _     <- fiber.join
      _     <- IO(println("Return success")) // <-- never gets printed
    } yield ExitCode.Success
  }
}

This program never outputs Return success and runs forever. This is on Cats Effect 2.1.3. Do you have any insight into this?

Answer: Yes, this is a very subtle issue with Cats Effect 2. The problem is that join in CE2 promises something of type IO[A]. But then… what can it give you if the fiber was canceled? It never got an A. The answer is that it just deadlocks, which is exactly not what you want. In CE3, we changed join to produce an Outcome, which lets it signal back to you when cancelation took place.

Question: Will this change be introduced to CE2 also?

Answer: Unfortunately we can't without breaking binary compatibility. So in general, I would recommend using Deferred explicitly rather than relying on join in CE2, in part for this reason. It gives you more control over what happens.
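The CE3 design change can be modeled in a few lines of plain Scala (a sketch of the idea only - the real cats-effect `Outcome` is effect-polymorphic and richer than this): join can no longer promise an `A`, so it returns an `Outcome` that makes cancelation an explicit case the caller must handle instead of a deadlock.

```scala
// A simplified model of CE3's Outcome: the three ways a fiber can finish.
sealed trait Outcome[+A]
case class Succeeded[A](value: A) extends Outcome[A]
case class Errored(e: Throwable)  extends Outcome[Nothing]
case object Canceled              extends Outcome[Nothing]

// A CE2-style join has no A to return for a canceled fiber - hence the hang.
// With Outcome, the caller decides what cancelation means:
def onJoin[A](outcome: Outcome[A], fallback: A): A = outcome match {
  case Succeeded(a) => a
  case Errored(e)   => throw e
  case Canceled     => fallback // explicit, instead of hanging forever
}

val joined = onJoin(Canceled: Outcome[Int], fallback = -1)
```

This is also why Deferred is the safer tool on CE2: completing a Deferred is an explicit signal, so cancelation can be made observable rather than silently absorbed.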


Question: You mentioned that someone from Cats Effect identified the bug, how did they/you go about diving into it to figure out where the bug was and what tools did you use for it?

Answer: It was a pretty painful process. so the tools used were a unit test suite where we had set the iteration count so that we could reproduce it relatively consistently, sbt, tmux, an EC2 Graviton instance, and a lot of patience.

Raas was the one who did the bulk of the work. Here's the reproduction: https://gist.github.com/RaasAhsan/8e3554a41e07068536425ca0de46c9e8


Question: I have a question which is somewhat related. I would still be interested to know if there are any performance implications of putting values only accessed from within an IO into the same IO or the surrounding scope. I.e.

val a = SomeBigArray(...)
IO {
  doNetworkCall(a)
}
// vs
IO {
  val a = SomeBigArray(...)
  doNetworkCall(a)
}

Answer: In general, two IOs will be slower than one. Whether or not that matters is kind of an interesting question that depends on what doNetworkCall is doing.

The advantage of pulling it apart into two is twofold:

  • You get a cancelation check between each one
  • You get better composability (so you can refactor a little more easily and break it apart)

It's kind of a judgment call as to which you do. I always bias towards smaller IO { ... } blocks until I measure performance costs that make me roll it back, and that's pretty rarely something that has to happen.


The PR review was hilarious.

I think we added some comments, so it was more like a +20/-1 PR or something.

Ref: https://github.com/typelevel/cats-effect/pull/1416



Daniel Spiewak

Principal Engineer
Disney Streaming Services


What’s new in F# 5.0 and beyond

The F# language delivers practical, enjoyable, and productive programming for the era of the cloud. At the core of F# is succinct, performant functional-first programming, compiling to both .NET and JavaScript, with cross-platform, open-source toolchains for those at home in either ecosystem.

In this talk I’ll describe how in F# 5.0 and beyond we are adding more magic right across the F# stack – keeping programming simple and correct yet delivering the features you need for maximum productivity:

  • Added expressivity and performance for DSLs using F# computation expressions
  • High-performance state machines and resumable code for functional DSLs for collections, tasks, asynchronous sequences, and more
  • Improved package management integration in F# scripting
  • Interactive notebooks and a wide range of other tooling improvements
  • F# analyzers, e.g. for additional shape checking in AI tensor programming
  • Turnkey programming stacks for the client, server, and full-stack programming

Join me for this walk through the latest in 2021 for F#.

Q&A

Question: What extra type level features would you like to see in F#? Specifically things that you think are considered advanced but would really help solve practical problems.

In my work I’d like to use more GADTs (I don’t think F# has that), and the new work on effect systems/linear types and more dependent typing.

Answer: This is a good question. F# is bounded partly by interop concerns - I think a little more than some other languages - we suck in a lot of .NET libraries directly, which is great from many perspectives. But if the libraries don't come with sufficient effect information then we don't gain that much by adding effect tracking.

That said, effect systems would work beautifully with F# computation expressions.

So it will be an area we will be keeping an eye on.


Question: What is F#'s ad-hoc overloading mechanism and how does it work?

Answer: There are two.

First, F# resolves object model overloading using nominal type information and type annotations, inferred on a left-to-right basis. This means, for example, that let f (x: int list) = x.Length resolves. But that doesn't offer generalization.

So F# also has a mechanism called "SRTP" (Statically Resolved Type Parameters) that allows structural constraints based on operations. These are generalized for "inline" code.

For example

let inline double x = x + x

resolves to a generic "double" function usable with any type supporting the "+" operation. This gives the basic kind of entry-level thing often solved with type classes.
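For comparison, the entry-level type-class version alluded to above looks roughly like this in Haskell (an approximate parallel only: Haskell's Num is a nominal class, whereas an SRTP constraint is structural):

```haskell
-- A rough Haskell type-class analogue of the SRTP-generalized double.
-- The nominal Num constraint stands in for F#'s structural
-- "supports +" constraint, so the parallel is approximate.
double :: Num a => a -> a
double x = x + x
```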

However, while SRTP is technically quite powerful (see the FSharpPlus library for example), we make writing SRTP code kind of hard, somewhat on purpose. This is partly because I don't believe the Functor/Monoid/Applicative/Bifunctor kinds of category-theory characterizations that people build are really hitting the sweet spot in coding simplicity. And also SRTP has a few glitches. And finally we may end up aligning with combined F#/C# proposals to add other variations on type-level programming.


Question: How do computational expressions improve on haskell "do" syntax?

Answer: This is covered quite a lot in "The Early History of F#" paper. https://fsharp.org/history/hopl-final/hopl-fsharp.pdf

Firstly, F# computation expressions are a single feature covering both comprehensions and monads (and other things). So in a sense one feature plays the role of two features of Haskell.

Next, F# list computation expressions support appending in nested position (with variables in scope). They also support the entire control syntax (try-catch, match, try-finally, nested function definitions and so on). In practice my impression is people write much larger and richer list comprehensions in F# as a result - I'm confident that would be borne out by a study. Entire applications are written where the view is basically one big comprehension.

Next, the notation in F# computation expressions can be extended more generally through "custom operators". This was actually inspired by Haskell-related research by Peyton Jones and Wadler ("Comprehensive Comprehensions").

Finally, F# computation expressions can also support query operations including LINQ-style meta-programming.

Check out The Early History of F# for more details on most of this, it's been reviewed by Haskell folk so should help clarify.


Question: What’s the current thinking around FSharpAsync vs Task - are we likely to see async formally deprecated & replaced by a builtin equivalent of TaskBuilder?

Answer: No, F# async won't be deprecated (though we may try to reimplement it in a backward-compatible way on a more efficient foundation).

Some of this is covered in the F# Async Programming Model paper.

  1. F# async offers implicit cancellation token passing. This is really much nicer than with tasks
  2. F# async allows safe async tailcalls
    return! someAsync
  3. F# asyncs are composed, then started. There is no implicit start like there is with tasks. Cognitively this is nicer to program with.

That said, I'd expect to see task { .. } gradually become the norm, and likely the first thing taught.

Question: Thanks! I’ve mostly been converting tasks to async, but I’m thinking if you’re predominantly writing application code that just runs tasks from C# dependencies you may be better off using task { } directly.

Answer: Yes, that's a fine approach. Also task performance is better for the async primitives, though in practice none of that normally matters if there's even a single asynchronous I/O (as there usually is).


Question: It's weird to see the error type on the right in result. F# doesn't have partial type parameter application?

Answer: Yes, F# doesn't let you partially apply type parameters.


Question: F#'s approach to simplicity has a lot to offer many disciplines. If you were to increase uptake of F# in one domain which domain would you focus on: C# devs, data scientists, data engineers, web devs.

Answer: Hmmm good question. It would be fairly evenly balanced between all four I think.

More generally, I would say "Python programmers". It's clear that very many people are being sucked into a sub-optimal spot by having Python as their one and only programming skill. That's fine up to 500-5000 lines but soon they will need more. The succinctness of F# plays very well with the Python mindset, while delivering performance and strong typing.


Question: F# is influencing C#, but what efforts are in place to move to using more F# in .NET itself? Is that even a sensible idea?

Answer: In practice, no. For core .NET components the dependency on FSharp.Core alone is enough to make people do the rewrite to C#, even if they prototype in F#.


Question: I have to say, as a Haskeller, fsharp makes me jealous. My brain doesn't work without monads and typeclasses, but fsharp has a very nice and coherent design. I wonder what Haskell can learn from fsharp's design (other than (|>) = flip ($) )

Answer: I think in practice Haskell can learn a lot from F#, yes, and it's an under-explored area ripe for some student or hacker language-designer to just let rip on in an iconoclastic kind of way.

Some specific areas that come to mind:

  1. Haskell's do notation could be thrown away in favour of something more like computation expressions. This would include allowing much more imperative looking syntax (try/finally, try/with, for, while) inside the notation, maybe to the point of basically having C or Java or C#-style control construct syntax
  2. Personally I think nominal object programming with implicit object constructors (even without any inheritance) is really nice even for purely functional data. A version of this should be incorporated, plus a monadic version to allow a monadic computation as the implicit constructor
  3. F# integrates subtyping into HM type inference and nominal object programming well enough for practical purposes. The ways it does this won't satisfy FP purists (it's nominal and has cases where it's left-to-right algorithmic) but it's good enough for practical object programming

So basically, to be iconoclastic, someone could experiment with sort of throwing away Haskell as it stands today and make a Haskell* that embraces both richer syntax for monads and also a degree of nominal object programming, while keeping the core expression language.

The problem is, that would split the Haskell community in all sorts of ways, and not only because it's effectively a new language. It might satisfy those who just want to get on with monadic/imperative/async/nominal-object programming in a practical get-stuff-done way, but certainly wouldn't satisfy those who need none or few of those things and indeed have strong dislike for them, and don't want Java control constructs emerging in the middle of their language. (Core OCaml and F# steer a real balance between having control constructs but still basically being functional-first programming). There will also be interactions with laziness I haven't thought of, especially in the object notations.

The Haskell community work hard to maintain its unity and I actually don't see this being addressable without a new language, just like Scala, F#, Swift, Kotlin etc. have all been new languages.



Don Syme

Principal Researcher
Microsoft Research, Cambridge


Phoenix LiveView: Building Scalable Web Single Page Apps Doesn't Have to Hurt

Meet Phoenix LiveView, the Elixir-based programming environment for Phoenix. The author of Programming Phoenix LiveView, and a LiveView trainer, will walk you through how the programming model works. Along the way, you'll see what web development is supposed to feel like. The best thing? You'll be able to build the interactive single-page apps your customers want without compromising on code organization or writing custom JavaScript.

Q&A

Question: It looks like OTP is based on callbacks, is it possible to use functions like observable streams?

Answer: OTP itself is callback-based, yes, but things built on top of it are known to use observable streams. One that comes to mind is Flow, which is a bit like Flink but for small data. https://hexdocs.pm/flow/Flow.html


Question: Elm (only) supports a similar architecture and has the “time travelling debugger”. How do you go about debugging applications written in this style in Elixir/Phoenix? Is there a similar tool available?

Answer: Yes, Elm has a very similar architecture. There are lots of debugging choices, and more are showing up all the time. But no, nothing like Elm’s time traveling debugger, though nothing prevents it from existing because it’s really about the state in the socket.



Bruce Tate

Founder
Groxio


Idioms for building fault-tolerant applications with Elixir

This talk will introduce developers to Elixir and the underlying Erlang VM and show how they provide a new vocabulary which shapes how developers design and build concurrent, distributed and fault-tolerant applications. The talk will also focus on the design goals behind Elixir, use cases, and include some live demos.

Q&A

Question: Were you already thinking of BEAM as a target VM for this?

Answer: Elixir only exists because of the BEAM. I wanted specifically to use the Erlang VM!

More interesting tech notes about WhatsApp: at some point they crossed 1 billion daily active users, and there were more messages going through WhatsApp in a given day than the whole global SMS system. Here is one reference: https://fortune.com/2017/07/28/whatsapp-one-billion-daily-users/


Question: Is it true that Elixir was originally supposed to be Ruby targeting BEAM? What is the true Elixir backstory?

Answer: Not really, it was never intended to be a Ruby port. Even at the beginning, when Elixir was object-oriented, the object model was different from Ruby's and it included features Ruby did not have, such as pattern matching.

Question: Oh, I didn’t know Elixir was OO at the beginning!

Answer: It was, at the very beginning, before it was ever released - in the very early prototype stages. The code is all on the GitHub repo. Here is a link showing how the List module object looked at the time: https://github.com/elixir-lang/elixir/blob/84e60f6bbf7fb419f43684b3cdb6b1a3536e78d5/lib/list.ex - but this is like... 1 month into the prototype. The Elixir that is today's Elixir was released ~15 months later.


Question: Was there a point when you were writing code as AST and then building the Elixir language from AST, or did you do Elixir first then exposed the AST for macro programming? Or to rephrase, where did the awesome macro capability come in the history of the language?

Answer: It came very early on. I spent about 3 months on the original prototype, which was the OO-based one, and I considered it a failure. So I went out to research and investigate how to solve the problems I was facing (meta-programming, extensibility, etc.). In this process, I dug more into Lisp, but we already had 2 or 3 Lisps at the time on the Erlang VM, so I thought I would try out having Lisp-like macros but without the s-lisp language.

Then I was building the AST and the macros side by side. Here is a sketch I wrote early on exploring those ideas: https://github.com/josevalim/lego-lang

I also later learned that the original Lisp paper by John McCarthy had the same idea. There was s-lisp, which was the AST, and m-lisp, which was a higher-level syntax that became s-lisp! But s-lisp was the one that really gained traction.

Btw, here is the Livebook project that I am demoing: https://github.com/elixir-nx/livebook/ - there is also a Docker image now, so if you have never played with Elixir before, you can install Livebook and have something to experiment with.


Question: Speaking of Nx, when you announced it a few weeks ago, I thought it was an April Fools joke at first :laughing:. Are you still actively working on that, or are you cooking up the next big thing?

Answer: Haha, I believe Nx, Livebook, and numerical computing are likely to be my focus over the next few years!

Question: Do you have plans to combine Nx with Flow as an answer to Flink/Spark? I know the idea behind Flow is to solve “small data” problems, but with Nx you’ve shown that Elixir has the potential to do serious numerical computing, so why stop at “small data”?

Answer: Perhaps we can do that! Or perhaps it has to be something different that simply coordinates Nx work on the data. I haven't thought that far yet :)


Question: Could Elixir have been implemented on the JVM? Would it have been able to get similar performance?

Answer: Elixir really relies on the power of the Erlang VM, so it could only happen if someone ported the whole Erlang VM backbone to the JVM, which is a huge amount of work. There was Erjang but I don't think it is still maintained (the last commit was 5 years ago: https://github.com/trifork/erjang).


Question: I was so excited when I realised the macros were quoted AST - for a while there it was my hammer and everything was a nail (I mostly recovered from that, but it’s still awesome)

Answer: Haha, yes! That's why the first rule of the macro club is: don't use macros.


Some more links:

  1. ExUnit test framework: https://hexdocs.pm/ex_unit/
  2. Ecto for query and data toolkit: https://hexdocs.pm/ecto/
  3. Nx for numerical computing and hardware acceleration: https://github.com/elixir-nx/nx/tree/main/nx#readme

And more links from the talk:

  1. Phoenix (apis and realtime interactive web apps): https://www.phoenixframework.org/
  2. Nerves (embedded): https://www.nerves-project.org/
  3. Membrane (audio/video streaming): https://www.membraneframework.org/

And of course, the Elixir website too: elixir-lang.org



José Valim

Chief Adoption Officer
Dashbit


Resource Analysis with Refinement Types

Liquid Haskell is an extension of Haskell's type system that allows annotating types with refinement predicates. It's great for ensuring the correctness of your code, but it can also be used to improve its performance.

If you track your resources then Liquid Haskell can be used to statically bound the resources needed at runtime, thus statically deciding how performant your code is. You are liquidating your assets.

To track resources we define a `Tick` monad that ticks each time a resource (ranging from recursive calls to thunks) is used. Then we use refinement types to statically approximate the number of ticks that can occur at runtime. This reasoning aids runtime code optimization, since it can be used to compare the resource usage of two different programs.

In this talk, I will present this technique through small examples (sorting algorithms and mapping) and discuss the advantages and current limitations of adapting it to real-world code.

Q&A

Question: Could this be useful for proving bounds on open file descriptors in FFI code (if you put the Ticks in the right places)? Niki, the talk covered the idea of generating ticks well, but since some resources (FDs, memory, etc.) can be released, how easy is it to model that concept with LH?

Answer: That sounds like a very good application!

Release is just a negative cost, so it is easy to model! There is no requirement for the cost to be non-negative; you just need to define release x = Tick -1 x.

BTW, the actual implementation gives you a cost n x = Tick n x operator.
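As a loose sketch of the `Tick` monad described in the abstract (illustrative only - the names and the exact API here are invented, and the real Liquid Haskell development pairs this with refinement annotations that bound the cost statically):

```haskell
-- A minimal cost-counting Tick monad: every resource use "ticks"
-- the counter, and releasing a resource is just a negative cost.
data Tick a = Tick { cost :: Int, value :: a }

instance Functor Tick where
  fmap f (Tick c x) = Tick c (f x)

instance Applicative Tick where
  pure = Tick 0
  Tick c f <*> Tick d x = Tick (c + d) (f x)

instance Monad Tick where
  Tick c x >>= f = let Tick d y = f x in Tick (c + d) y

-- Charge one unit of resource (e.g. one recursive call or thunk).
step :: a -> Tick a
step = Tick 1

-- Release, as in the answer above, is a cost of -1.
release :: a -> Tick a
release = Tick (-1)
```

Running a computation in this monad yields both its result and its total cost; refinement types then let Liquid Haskell prove static bounds on that cost.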


Question: Refinement types and the sort of theorem proving you're doing here seem to have some overlap with the dependent haskell efforts. Both attempt to prove more and more sophisticated properties using types. How do you see these efforts working together in the future?

Answer: It would be great for liquid types and dependent Haskell to work together. Though it seems to be quite difficult, because the underlying logics are quite different (SMT-based classical logic for liquid types and Coq-style constructive logic for dependent Haskell).

So, there is not an active plan to merge the two reasonings, but indeed it would be amazing if we did!


Resources:

You can play with Liquid Haskell using our online demo: http://goto.ucsd.edu:8090/index.html#?demo=SimpleRefinements.hs

But recently we turned it into a GHC plugin, so you can use it by just adding the ghc-options: -fplugin=LiquidHaskell option in your cabal file (e.g., https://github.com/kosmikus/popl21-liquid-haskell-tutorial/blob/master/popl21-liquid-haskell-tutorial.cabal#L9).

The standard applications are checking termination and totality of your functions (these checks are performed by default) and list indexing properties. You can also use it in this “theorem-proving style” mode that I presented to prove any fancy program “meta-properties”.



Niki Vazou

Research Assistant Professor
IMDEA Software Institute


Cultivating an Engineering Dialect

Haskell has seen success in commercial environments, with teams of professional engineers choosing it for its claims of reliability, a rapid development pace, and easier maintenance over the long term. On top of that, a large community of hobbyist tinkerers and academic researchers are always releasing new and exciting abstractions, libraries, and language extensions, each offering improved ways to structure, build, and test our programs.

Engineering teams have diverse knowledge and skill levels, and new team members need to come up to speed to work effectively. This poses a challenge: which abstractions, libraries, and language extensions should we choose from the ever-growing pool? How should we determine what level of Haskell to adopt? Should we always embrace the cutting edge to squeeze out every advantage, leaving new hires in the dust? Should we reject novelty and focus only on the “simple” or “boring” ways of doing things, even if doing so gives up some potential effectiveness?

This talk will bring clarity to these questions. Rather than prescribe a uniform solution, we offer you the tools of thought to make informed, intentional decisions, and cultivate an engineering dialect that works for you.

Q&A

Question: What’s the biggest unexpected thing or mindshift you’ve had going from research Haskell --> corporate Haskell?

Answer: I worked on a research project for a year at the end of my time in university, on a code base I inherited from my supervisor. The code in that code base was simpler than "Simple Haskell"! Haskell written like it was Miranda, with no open source dependencies, error for error handling, and that sort of thing. When I took over, I used the coolest stuff I knew about. I wasn't thinking in these terms at that stage though, so I ended up with a bit of a mess I had to get myself out of, and learnt some of these lessons.

A benefit of corporate Haskell is that there tends to be more than one person at a time working on a given code base, and I think having a team (and more broadly a community) is critical.


Question: What about containment/partition? You're talking about weight/power ratio as if it were distributed over the codebase, but do you have advice about how to contain the risk so some of the high weight code can be contained?

Answer: I think this is possible to some degree, eg. by hiding things behind library boundaries, so all the GADTs live under the bonnet or something. I don't like this argument so much though, because I think everyone on the team needs to be able to work on all of the team's code, so eventually the person who doesn't know GADTs will have to open the bonnet before they can close their ticket. I also think, if taken to extremes, this risks creating a sort of caste system, where the elites who know the mystical secrets get to work on all of the libraries, and then the unenlightened write the business logic in the middle.

Question: I think a good example of this is the boundary between application and operations, where the speed at which people learn new tools can be different on each side.

Not because I love Nix, but when I saw it I thought "one of these things is not like the others".

Your Makefiles can be hairy, and the intern can still type make foo.


I also wonder whether "everyone on the team needs to be able to work on all of the team's code" is one of those rules you have to know when and why to break.

In the Python world sometimes you have to go for ctypes and write some C glue, that's part of the job. But it's not the same language.

Constraints may be different in Haskell codebases.

Also I guess the risk of what you call "the caste system" may also be necessary as specialisation once your team grows.

It's ok to have someone write ML algorithms, someone else deploy them to production pipelines, and someone else write business logic using the results.

In your case, wouldn't it be OK for someone to maintain a DSL and others to consume it, if the team has grown enough? The libraries used on each side of this bargain might be different, and the risk calculus has different parameters.

Answer: They're probably similar to constraints in other languages, for the most part. GHC language extensions are a great example where they can change everything from the surface language to the typechecker, and everyone likes different sets of them.

Question: Not arguing, rather wondering whether we're thinking about different constraints. Seems like C++, where you have to agree on "which C++" before writing code, or you end up with widely different languages.

Answer: My talk mostly focused on a single team. I imagine these ideas would need subtle modification to work for multiple interacting teams (eg a DSL producer and consumer as you say)

Nix is a really complicated example for me. I have very strong mixed feelings about it. I find it to be the best-in-class solution, but I also have found it to have a non-negotiable learning cliff. If my primary role was deploying code, I suppose I would have the time to sit down and learn Nix as deeply as I've learnt Haskell, but I find that the only way to learn it is to read all four massive manuals (which have no definitive mutual ordering), plus blog posts, plus read 500 files called default.nix on GitHub whenever I have a problem to solve, plus hang out on IRC. I've found that the error messages are inscrutable - usually when I make a simple mistake in my file, I will get an error "string is not a function" deep in the guts of some other file I've never heard of. Still, now that our Nix is in place, I can update and add dependencies and so on, and we have scripts to do a bunch of common tasks, and Isaac and Dave know Nix very deeply, so I have a community to draw on.

We've begun trying to document our Nix usage in our handbook. It's been harder than Haskell. We need to put more effort into that problem soon.

I don't like Nix, and I'm glad we use it.


Question: What's your strategy for "There's this wacky technique but the one person attached to the technique that knows it well enough to teach it has left the company"?

If you have encountered that situation.

Answer: Right, I'm hoping that they did something like in this talk, where they linked to their favourite blog posts or talks before they left, and helped others learn. If not, I suppose you'll have to live with the pain of learning that technique unassisted, or rewrite the relevant parts of the code. Neither is usually very appealing :(



George Wilson

Functional Programming Engineer
PaidRight


Eliminating Bugs with Dependently Typed Haskell

Using dependent types in production Haskell code is a practical way to eliminate errors. While there are many examples of using dependent Haskell to prove invariants about code, few of these are applied to large scale production systems. Critics claim that dependent types are only useful in toy examples and that they are impractical for use in the real world. This talk analyzes real world examples where dependent types have enabled us to find and eliminate bugs in production Haskell code.

Q&A

Question: How did you find the ergonomics of using Dependent Typing in Haskell? eg compile times, error messages

Answer: I haven't done any scientific measurements, but anecdotally speaking compile times were not a problem. For error messages, it depends a lot on what you're doing. For basic usages of GADTs, you'll get a friendly "expected X, but got Y". If you start using a lot of type families and existential types, then you can run into some crazy stuff. There's one error that says "my brain just exploded"; that one isn't so friendly.


Question: Is the Thrift IDL Haskell implementation available as open source?

Answer: Yep! We open sourced it earlier this year! https://github.com/facebookincubator/hsthrift


Question: looking at hsthrift, it looks like the Util.* modules reimplement half of the Haskell ecosystem!

Answer: It's sort of meta. Those utility functions are for generating Haskell ASTs


Question: God, I love this sort of type level programming so much. It feels tedious and complex in small projects, but scales really well in my experience

Answer: Thanks! I agree, I think it does scale well and in fact it helps the entire project scale by adding more structure. One thing I didn't mention is that any sort of refactoring in these projects was super easy because the compiler would tell us everything that needed to be updated

If you like this stuff, you can see an example where I went totally overboard and tried to encode the runtime complexity of sorting algorithms in Haskell's type system. It worked OK for insertion sort, but the GHC constraint solver didn't like dealing with log very much.

https://github.com/zilberstein/verified-complexity/blob/master/Algorithm/Sorting.hs


Question: Do you have any further suggestions on where the line crosses to "my brain just exploded"? And any evidence of how that has changed over time with newer releases of GHC?

Answer: "My brain just exploded" happens when existentially quantified type variables escape their defining scope. It can happen pretty easily when using GADTs because you often want to use an existential type to normalize the type of a GADT. It is sometimes easy to fix by adding a type signature to the output. Usually if you can't fix it easily, then you really are doing something unsafe. Still, the error message isn't that helpful in finding and fixing the bug.
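A tiny illustration of that failure mode (the types here are invented for the example, not taken from hsthrift): an existential hidden in a GADT may be consumed inside the pattern match, but GHC rejects any attempt to let it escape into the result type.

```haskell
{-# LANGUAGE GADTs #-}

-- A GADT hiding an existential type that only promises Show.
data Some where
  Some :: Show a => a -> Some

-- Rejected by GHC: the existential 'a' would escape its scope.
-- unwrap (Some x) = x

-- Accepted: consume the value while the match is still in scope,
-- so nothing existential leaks into the result type.
describe :: Some -> String
describe (Some x) = show x
```

The usual fix, as the answer says, is to give the output a concrete type (here String) so the existential never needs to appear in it.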



Noam Zilberstein

Software Engineer
Facebook

