
Morepath 0.15 released!

Today the Morepath developers released Morepath 0.15 (CHANGES).

What is Morepath? Morepath is a Python web framework that is small, easy to learn, extensively documented, and insanely powerful.

This release is a smaller one without big visible changes or big under-the-hood changes. Instead it polishes a lot of stuff. It also continues the trend of contributions from multiple core developers.

This release prepares the way for the next Morepath release. To this end we've deprecated a number of APIs. We are preparing a big change to the APIs of the underlying Reg predicate dispatch library that should make them less implicit, less magical, and slightly faster. Stay tuned!

Impressions of React Europe 2016

Last week I went to the React Europe conference 2016 in Paris. It was a lot of fun and inspirational as well. I actually hadn't used React for about 6 months because I've been focusing on server-side stuff, in particular Morepath, but this really makes me want to go and work with it again (I'm available). I especially enjoy the creativity in the community.

In this post I want to give my impression of the conference, and highlight some talks that stood out to me. There are actually too many to highlight here: I thought the talk quality of this conference was very high. I also appreciated the wide range of topics -- not everything was about React directly. More of that, please!


I was quite worried about travel this year. I'm in the Netherlands, so it all should be so easy: hop in a train to go to Rotterdam, a 45 minute ride. Then take the Thalys that speeds from Rotterdam to Paris in about 3 hours. In total it takes about 4 hours. Awesome.

But it's strike season in France. Railway strikes were threatened. And then there was a railway strike in Belgium, through which the train passes, on the day I was to travel. Uh oh. I had already gotten some warnings in the days before about possible train cancellations due to the strikes. But my train was still going.

But travel in the Netherlands at least wasn't disrupted, so I wasn't worried about that. I made it in time to the normal intercity train that brings me from my home town, Tilburg, to Rotterdam. Found a comfortable seat. All ready to go. Then an announcement: please leave the train as it can go no further. A cargo train had broken down ahead of us. Argh!

In the end I managed to get to Rotterdam and catch a later Thalys, and made it to Paris, just 2 hours later than I'd planned.

I was also worried about announced strikes in the Paris metro system on the day of the conference. Getting around in Paris is very convenient with the metro, but not if it isn't going. In the end the metro wasn't affected.

What I did not anticipate was the whole flood situation, to the point where they had to move parts of the inventory of the Louvre. But Paris is a big city and the floods did not affect me.

So in the end what I worried about never happened and stuff happened that I didn't worry about at all.

Hackathon and MobX

Like last year there was a hackathon one day ahead of the conference at the Mozilla offices in Paris.

Last year's hackathon was special: I met up with Lee Bannard and we worked on reselect, which became quite a popular little library for use with Redux. You might enjoy my story on that.

I was very happy to see Lee again at this year's hackathon. We didn't create any new code this time; we spent most of our time learning about MobX, which I first heard about that day. We met Devin Pastoor at the hackathon. He already had a little app that used MobX that he wanted to work on together. Lee and myself helped a little with it but then got distracted trying to figure out how MobX's magic works.

MobX is a state management library, typically used with React, that takes a different approach than Redux, the now dominant library for this. MobX lets you use normal OOP-style objects with state and references in your client-side model. Unlike Redux, it does not require you to normalize your state. MobX observes changes to your objects automatically and is very clever about only updating the parts of the UI that are affected.

This gives MobX different tradeoffs than Redux. I haven't used MobX in practice at all, but I would say MobX is less verbose than Redux, and you get more performance out of the box automatically. I also think it would be easier for newcomers to adopt. On the other hand, Redux's focus on the immutable state constraint simplifies testing and debugging, and opened up a rich ecosystem of extensions. Redux's implementation is also a lot simpler. People want a simple answer to "what is better", but these really are tradeoffs: which way is right for you and your application depends on who you are and what you are working on.
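The core observable idea can be sketched in a few lines of Python (a rough illustration of the pattern; MobX's actual JavaScript API is different and far more clever):

```python
# Minimal sketch of the observable idea behind MobX (illustration only;
# MobX's real JavaScript API is different and far more clever).
class Observable:
    def __init__(self, value):
        self._value = value
        self._observers = []

    def get(self):
        return self._value

    def set(self, value):
        # Writing notifies every registered observer, so dependent
        # UI pieces can update automatically.
        self._value = value
        for observer in self._observers:
            observer(value)

    def observe(self, observer):
        self._observers.append(observer)
```

MobX goes much further than this: it tracks which observables a component actually reads while rendering, so only the affected components re-render when a value changes.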

Sorry folks, Lee and I created nothing new this time. But we had fun.

Dan Abramov: The Redux Journey

Dan Abramov, one of my open source heroes, gave a very interesting talk where he talked about the quick and wild ride Redux has been on since last year. Redux was indeed everywhere at this conference, and people were building very cool stuff on top of it.

Dan explained how the constraints of the Redux architecture, such as reducers on immutable state, also lead to its stand-out features, such as simple debugging and persisting and sharing state. He also spoke about how Redux's well-defined minimal contracts help its extension and middleware ecosystem.
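In Python terms (a sketch of the constraint, not Redux's actual JavaScript API), the reducer idea looks like this:

```python
# Sketch of the reducer constraint in Python (Redux itself is JavaScript;
# the names here are my own illustration).
def counter(state, action):
    # A reducer is a pure function (state, action) -> new state.
    # It never mutates state in place.
    if action == "increment":
        return state + 1
    if action == "decrement":
        return state - 1
    return state

def replay(reducer, initial, actions):
    # Because reducers are pure, re-applying a log of actions
    # reproduces every intermediate state -- the basis of Redux's
    # time-travel debugging and state persistence.
    states = [initial]
    for action in actions:
        states.append(reducer(states[-1], action))
    return states
```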

Dan's talk is interesting to anyone who is interested in framework design, even if you don't care about React or Redux at all.

Watch "The Redux Journey"

Eric Vicenti: Native Navigation for Every Platform

This talk was about using a Redux-style approach to manage navigation state in mobile apps, so that the app can react appropriately to going "back", and it helps with links within the app as well as responding to links from other apps. Web folks have deep experience with links, and this is an example of how bringing web technology to mobile app development can make it easier.

Watch "Native Navigation for Every Platform"

Lin Clark: A cartoon guide to performance in React

Lin Clark gave a great talk where she explained why React can be fast, and how you can exploit its features to help it along. Explaining complex topics well to make them seem simple is hard, and so I appreciated how well she accomplished it.

If you are new to React this is a really good talk to watch!

Watch "A cartoon guide to performance in React"

Christopher Chedeau: Being Successful at Open Source

Christopher described what strategies Facebook uses when they open source stuff. I especially liked the question they made sure to ask: "What did you struggle with?". Not "what do you want?" as that can easily devolve into a wishlist discussion, but specifically asking about problems that newcomers had. One fun anecdote: the way to make a FAQ go away was not writing more documentation but changing an error message.

I also liked the clever "fake it until you make it" approach to making a community appear more active than it is in the early stages, so that it actually becomes active. One trick they used is to ask people to blog about React, then publish all those links on a regular basis.

As an individual developer who occasionally open sources stuff I must point out rule 0 for open source success is "be a huge company with lots of resources like Facebook". Without those resources it is a much bigger struggle to build an open source community. (It also doesn't help that with Morepath I picked a saturated space: Python web frameworks. It's hard to convince people it's innovative. But that was the same problem React faced when it was first released.) (UPDATE: of course Facebook-level resources are not required for open source success, there are a lot of counter examples, but it sure helps. The talk mentions a team of multiple people engaging the community through multiple routes. A single individual can't replicate that at the start.)

Nevertheless, open source success for React was by no means guaranteed, and React helped make Facebook's reputation among developers. They made the React open source community really work. Kudos.

Watch "Being Successful at Open Source"

Dan Schafer: GraphQL at Facebook

I liked Dan Schafer's talk: a nice quick recap of why GraphQL is the way it is, some clear advice on how to deal with authorization in GraphQL, then a nice discussion on how to implement efficient queries with GraphQL, and why GraphQL cache keys are the way they are. Clear, focused and pragmatic, while still going into the why of things, and without overwhelming detail.

Watch "GraphQL at Facebook"

Jeff Morrison: A Deepdive into Flow

This talk was about the implementation of Flow, a type checker for JavaScript code. This is a very complex topic involving compiler technology and type inference, yet the talk was still amazingly clear. It gave me the nice illusion that I actually understand how Flow works. It also makes me want to try Flow again and integrate it into my development tool chain.

Flow is a general JavaScript tool. It comes out of Facebook but is not directly connected to React at all, even more so than something like GraphQL. I really appreciated that the organizers included talks like this and mixed things up.

Watch "A Deepdive into Flow"

Cheng Lou: On the Spectrum of Abstraction

This talk, which isn't about React, really stood out for me, and from what I heard also resonated with others at the conference. It tied neatly into the themes Dan Abramov already set up in his opening talk about Redux. Dan told me later this was not actually coordinated. The ideas are just in the air, and this speaks for the thoughtfulness of the React community.

Cheng Lou's talk was a very high level talk about the benefits and the costs of abstraction. This is something I care about a lot as a developer: how do I avoid over-engineering and under-engineering (I've written about it before), and solve problems at the right level? Software has many forces on many levels pulling at it, from end-users to low-level details, and how do you balance out these forces? Engineering is so much about dealing with tradeoffs. How do you even communicate about this?

The next day I had an interesting chat with Cheng Lou about his talk, where he discussed various things he had to cut out of his talk so it wouldn't be too long. He also mentioned Up and Down the Ladder of Abstraction by Bret Victor, so that is now on my reading list.

I highly recommend this talk for anyone interested in these topics.

Watch "On the Spectrum of Abstraction"

Preethi Kasireddy: Going from 0 to full-time software engineer in 6 months

This was a 5 minute lightning talk with a personal story: how overwhelming software development is to a newcomer and how it can nonetheless be learned. During the talk I was sitting next to someone who was relatively new to software development himself and I could see how much this talk resonated with him.

Preethi Kasireddy also encouraged more experienced developers to mentor newcomers. I've found myself that mentoring doesn't have to take a lot of time and can still be hugely appreciated. It's fun to do as well.

A new developer is often insecure as there are just so many things to grasp, and experienced developers seem to know so much. Ironically I sometimes feel insecure as an older, more experienced developer as well, when I see people like Preethi learn software development as quickly as they do. I certainly took more time to get where they are.

But I'm old enough to have gotten used to intimidatingly smart younger people too. I can keep up. The Internet overall helps with learning: the resources on the Internet for a new developer may be overwhelming, but they are also of tremendous value. Preethi called for more intermediate-level resources. I am not sure this series I wrote counts; I suspect Preethi is beyond it, but perhaps others will enjoy it.

(Video not up yet! I'll update this post when it is.)

Jonas Gebhardt: Evolving the Visual Programming Environment with React

This was another one of those non-React talks I really appreciated. It is related to React in that it is inspired by both functional programming patterns and component-based design, but it's really about something else: a UI to construct programs by connecting boxes with arrows.

There are many of these around. Because these don't seem to ever enter the daily life of a programmer, I tend to be skeptical about them.

But Jonas Gebhardt acknowledged the prior art, and the approach he described is pragmatic: an open world approach in the web browser, unlike many of the "we are the world" sandbox implementations from the past. Annotated React components can serve as the building blocks. He even sketched out an idea for connecting UI input and output to custom user interfaces.

So I came away less skeptical. This approach has potential and I'd like to see more.

Watch "Evolving the Visual Programming Environment with React"

Bonnie Eisenman: React Native Retrospective

I really like retrospectives. This was an excellent talk about the history of React Native over the course of the last year and a half. React Native is the technology that lets you use JavaScript and React to develop native iPhone and Android apps. Bonnie Eisenman also wrote a book about it.

React Native is a potential game changer to me as it lets people like me use our deep web development experience to build phone apps. The talk made me excited to go and play with React Native, and I'm sure I wasn't the only one. In a chat afterwards, Bonnie confirmed that was a goal of her talk, so mission accomplished!

Watch "React Native Retrospective"

Phil Holden: subdivide and redux-swarmlog

Phil Holden gave a 5 minute lightning talk, but please give him more space next time. He discussed Subdivide, an advanced split pane layout system for React, and then another mind-blowing topic: using WebRTC to create a peer to peer network between multiple Redux frontends, so that they share actions. This lets users share data without a server being around. He packaged this as a library called redux-swarmlog.

I've been thinking about peer to peer serverless web applications for some years as I believe they have the potential to change the web, and Phil's talk really reignited that interest. Peer to peer is hard, but the technology is improving. Later that day, I had the pleasure of having a brief chat with Phil about such wild topics. Thanks Phil for the inspiration!

(Video not up yet! I'll update this post when it is.)

Andrew Clark: Recomposing your React application

Andrew Clark is Internet-famous to me, as he created Flummox, the Flux state management framework I used before switching to Redux (Andrew in fact co-created Redux). In this talk he discusses recompose, a library he wrote that helps you do sophisticated things with pure, stateless function components in React. I need to play with it and see whether it fits in my React toolbox. Andrew also described the interesting techniques recompose uses to help reduce the overhead of small composed functions -- this highlights the properties you gain when you stick to the pure function constraint.

Watch "Recomposing your React application"

Jafar Husain: Falcor: One Model Everywhere

When multiple development teams have a similar idea at about the same time, that may be a sign the idea is a good one. This happened to me when I came up with a client-side web framework a few years ago, thought I was onto something new, and then Backbone emerged, followed by many others.

Jafar Husain in this well-done talk described how Falcor and GraphQL were a similar solution to similar problems. Both Falcor and GraphQL let the client be in control of what data it demands from the server. He then highlighted the differences between Falcor and GraphQL, where he contrasted Falcor's more lightweight approach to GraphQL's more powerful but involved focus on schemas. It's tradeoffs again: which fits best depends on your use cases and team.

Watch "Falcor: One Model Everywhere"

Laney Kuenzel & Lee Byron: GraphQL Future

This was a wide-ranging talk that went into various issues the GraphQL team at Facebook is trying to solve, mostly centered on the need to receive some form of immediate update when state on the server changes. Laney and Lee presented various solutions in various states of readiness, from mostly untested ideas to stuff that is already deployed in production at Facebook. Very interesting if you're interested in GraphQL at all, and also if you're interested in how smart people tackle problems.

Watch "GraphQL Future

Constructive feedback

In my blog post last year I was clear I enjoyed the conference a lot, but also engaged in a little bit of constructive criticism. I don't presume that the React Europe organizers directly responded to my feedback, but let's see how they did anyway and give a bit more feedback here. My intent with this feedback is to do my bit to make a great conference even better.


Last year the conference was in early July in Paris and it was 40 degrees Celsius. The React Europe team responded by shifting the conference a month earlier. It was not too hot: problem solved.


Last year the hackathon assumed people were going to compete in a contest by default instead of cooperate on cool projects. This year they were very clear that cooperation on cool projects was encouraged. Awesome!

Still, I found myself walking around Paris with a friend on Friday night trying to find a quiet place so we could look at some code together. We enjoyed the conversation but we didn't find such a place in the end.

This is why I prefer the approach Python conferences take: a 1-3 day optional sprint for people to participate in after the conference has ended. Why I like afterwards better:

  • You can get involved in cool projects you learned about during the conference.
  • You can get to know people you met during the conference better.
  • Since there is no pressure it's a good way to wind down. Speakers can participate without stressing out about a talk they will be giving soon.

Facebook speakers

Many of the speakers at this conference work for Facebook. They gave excellent talks: thank you. I understand that having a lot of speakers from Facebook is natural for a conference on React, as that's where it originated (and Facebook hires people from the community). But this is an open source community. While I realize you'd take on more unknown quantities and it would be more difficult to keep up the quality of the talks, I would personally enjoy hearing a few more voices from outside Facebook next year.

Gender diversity

Last year I spoke a bit about gender diversity at the conference. This year there were more female speakers than last year (keep it up!), but male voices were still the vast majority. Women speakers are important in helping women participants feel more welcome in our conferences and our community. We can still do a lot better: let's learn from PyCon US.

Back home

The train ride back home on Saturday morning was as it should be: uneventful. I left the hotel around 9 am and was back home around 2:30 pm. I came home tired but inspired, as it should be after a good conference. Thanks so much to the organizers and speakers for the experience! I hope you have enjoyed my little contribution.

Morepath 0.14 released!

Today we released Morepath 0.14 (CHANGES).

What is Morepath? Morepath is a Python web framework that is powerful and flexible due to its advanced configuration engine (Dectate) and an advanced dispatch system (Reg), but at the same time is easy to learn. It's also extensively documented!

The part of this release that I'm the most excited about is not technical but has to do with the community, which is growing -- this release contains significant work by several others than myself. Thanks Stefano Taschini, Denis Krienbühl and Henri Hulski!

New for the community as well is that we have a web-based and mobile-supported chat channel for Morepath. You can join us with a click.

Please join and hang out!

Major new features of this release:

  • Documented extension API
  • New implementation overview.
  • A new document describing how to test your Morepath-based code.
  • Documented how to create a command-line query tool for Morepath configuration.
  • New cookiecutter template to quickly create a Morepath-based project.
  • New releases of various extensions compatible with 0.14. Did you know that Morepath has more.jwtauth, more.basicauth and more.itsdangerous extensions for authentication policy, more.static and more.webassets for static resources, more.chameleon and more.jinja2 for server templating languages, more.transaction to support SQLAlchemy and ZODB transactions and more.forwarded to support the Forwarded HTTP header?
  • Configuration of Morepath-based applications is now simpler and more explicit; we have a new commit method on application classes and applications get automatically committed during runtime if you don't do it first.
  • Morepath now performs host header validation to guard against header poisoning attacks.
  • New defer_class_links directive. This helps in a complicated app that is composed of multiple smaller applications that want to link to each other using the request.class_link method introduced in Morepath 0.13.
  • We've refactored both the publishing/view system and the link generation system. It's cleaner now under the hood.
  • Introduced an official deprecation policy as we prepare for Morepath 1.0, along with upgrade instructions.

Interested? Feedback? Let us know!

Morepath 0.13 now with Dectate

We just released Morepath 0.13 (changes). Morepath is your friendly neighborhood Python web framework with super powers, and with 0.13 it has gained a significant power upgrade.

This is the first Morepath release of 2016 and the biggest Morepath release in a while. The major change in Morepath 0.13 is that it is now built on the Dectate meta-framework for configuration.

Morepath's configuration system is finally documented in the form of Dectate. Developers can extend Morepath with new configuration directives and new configuration registries and they behave exactly like the native ones. They're built the same way.

Dectate offers powerful features that I believe take Morepath's decorator-based configuration system far beyond what you can do with most other web frameworks, which typically use a Python file for configuration or an ad-hoc decorator-based system. Too bad almost nobody seems to realize how much power this brings to the developer... A query tool for configuration, for instance.

The only framework with an equivalent system is Pyramid, but I think Morepath still has some features Pyramid does not: Morepath allows multiple independent configurations in the same runtime, for instance.

With the introduction of Dectate we've dropped Morepath's dependency on Venusian. Venusian was certainly valuable to Morepath, but over time we started to have some issues with it: its requirement to scan Python code was a barrier for beginners, and in some cases it could impact performance.

Dectate does not require scanning of packages in order to find registrations, but being able to scan can certainly be handy: that way you won't miss any stray decorators in modules that aren't imported anywhere else. Morepath now supports this through the new importscan dependency. importscan defines a recursive import function extracted from Venusian.
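The idea behind such recursive import scanning can be sketched with the standard library (a concept sketch; importscan's actual API may differ):

```python
import importlib
import pkgutil

def scan(package):
    # Recursively import every module in a package so that any
    # registration decorators inside them get a chance to run.
    # Concept sketch of what a tool like importscan does.
    prefix = package.__name__ + "."
    for info in pkgutil.walk_packages(package.__path__, prefix):
        importlib.import_module(info.name)
```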

Dectate: advanced configuration for Python code

Dectate is a new Python library. It's geared towards framework authors. It's a meta-framework: a framework you can use to easily construct robust and powerful Python frameworks.

So what's a framework anyway? A framework is a system that you supply with code, and it then calls that code at the appropriate times. Don't call us, we'll call you!

What does this look like in practice? Let's imagine you're building a web framework, and you want the people that use your framework to provide routes and functions that generate responses for those routes:

@route('/foo')
def foo_view(request):
    return "Some response!"

This hypothetical web framework then interprets HTTP requests, matches the request path against /foo, and calls the function foo_view to generate the response. Once the response is generated, it sends it back as an HTTP response.

In the abstract, the developer that uses the framework uses it for code configuration: you supply some functions or classes along with some configuration meta data. The framework then uses this code at the appropriate times.

So why would you, the framework author, need a meta framework to implement route? You just create a Python decorator. When it's called you just register the path and the function with some global registry somewhere. Yeah, yeah, "you just", we have heard that before. You could indeed just do that, but perhaps you want more:

  • What if the developer that uses your framework uses route('/foo') in two places? Which one to pick? Does the last one registered win or should this be an error? If the framework should pick the last one, what is the last one? Does this depend on import order?
  • What if there's an error? What if there is some configuration conflict, or perhaps your framework decides the developer passed in bad metadata? Ideally you'd like to tell the developer that uses your framework exactly which decorators are the problem, and where.
  • Perhaps you want to allow reuse: a developer can define a whole bunch of routes and then extend them with some extra routes for particular use cases.
  • Perhaps you want to allow overrides: a developer can define a whole bunch of routes but then override specific ones for particular use cases.
  • Perhaps you want your framework to be extensible with new decorators and new registries. How do you allow this in a way that still allows reuse, overrides and error reporting?

Dectate takes care of all that stuff. It is a documented and well-tested library, and it works for Python 2 and Python 3 code.
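To make the first bullet concrete, here is a toy registry in plain Python that detects duplicate-route conflicts instead of silently picking a winner (an illustration only; this is not Dectate's actual API):

```python
# Toy route registry with conflict detection (illustration only;
# this is not Dectate's actual API).
class Registry:
    def __init__(self):
        self._routes = {}

    def route(self, path):
        def decorator(func):
            # Registering the same path twice is reported as an
            # explicit conflict instead of silently picking a winner.
            if path in self._routes and self._routes[path] is not func:
                raise ValueError(
                    "conflicting registrations for %r: %s and %s"
                    % (path, self._routes[path].__name__, func.__name__))
            self._routes[path] = func
            return func
        return decorator

    def resolve(self, path):
        return self._routes[path]

registry = Registry()

@registry.route('/foo')
def foo_view(request):
    return "Some response!"
```

Dectate layers reuse, overrides, extensibility, and precise error reporting on top of this basic idea.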

Dectate is a spin-off from the Morepath web framework. Morepath is great and you should use it. Morepath has had a sophisticated configuration framework for some years now, but it had grown new features over time, which resulted in a bit of cruft, and it was also not well documented. To remedy that and make some other improvements, I've now spun it off into its own independent library: Dectate. You can read more about Dectate's history here; Dectate is an expression of many lessons learned over a long time.

It is my hope that Dectate goes beyond Morepath and will be considered by other framework authors. Maybe someone will create a Dectate-based configuration system for other web frameworks such as Django or Flask or Pyramid. Or perhaps someone will use Dectate for some new framework altogether, perhaps one not at all related to the web. Maybe you will! Let me know.

JavaScript Dependencies Revisited: An Example Project


A few years ago I wrote an article on how I was overwhelmed by JavaScript dependencies. In it I explored the difficulty of managing dependencies in a JavaScript project: both internal modules as well as depending on external packages. There were a ton of options available, and none of them seemed to entirely fit what I wanted. I followed up on it, going into the problem in more depth. Typically for JavaScript, there were a lot of different solutions to this problem.

The article on how I was overwhelmed is consistently one of the most read articles on this blog, even though the overwhelming majority of posts over the years are actually about Python. Apparently a lot of people are overwhelmed by JavaScript dependency management.

It's time for an update. A few years in JavaScript time is like 10 years in normal years, anyway. Everything changed like five times over since then.

What changed in JavaScript

So let's go through some of the changes:

One of the most important changes is that JavaScript now has a standard way to do imports in the ES6 version of the language, and that people are using it in practice, through transpilers like Babel.

Another change is that using npm and CommonJS packages has emerged as the most popular way to do client-side dependency management, after npm already being the dominant tool on the server. In fact, back in 2013 people were already suggesting I use npm and a bundling tool (like Browserify), and I was resistant then. But they were right. In any case, it was already clear then that CommonJS was one of the most structured ways to do dependencies, and it's therefore no surprise this led to a great tooling ecosystem.

Source map support in browsers has also matured. One of the reasons I was resistant to a compile-time step is that debugging using the browser introspector becomes more difficult. Now that source maps are pretty well established, that is less of a problem. It's still not as good as debugging code that doesn't need a compilation step, but it's tolerable.

While I'd like to be able to do without a compilation step, I need the performance of bundling, at least until the adoption of HTTP/2 makes this less of a concern. And since I want to use ES6 and JSX, some kind of compilation step cannot be avoided anyway.

A Bundling Example

Last week I talked to Timo Stollenwerk about bundling tools. He asked me to put a little example project together. So I created one: it does bundling through Webpack, lets you use modern ES6 JavaScript through Babel, and has ESLint support.

There are a ton of example JavaScript projects out there already. A lot of them have quite a bit of JavaScript code in them to drive tools like gulp or grunt -- something I don't really like. I prefer declarative configuration and reusable libraries, not custom JavaScript code I need to copy into my project. These projects also tend to have quite a bit of code and little documentation that tells you what's going on.

While creating my example project, I went a bit overboard on the README. So this example is the opposite of many others: a lot of documentation and very little code. In the README, I go step by step explaining how to set up a modern client-side JavaScript development environment.

The choices of tools in the project are my own -- they're stuff that I've found works well together and is simple enough for me to understand. Many alternatives are possible.

I hope it is of help to anyone! Enjoy!

The Incredible Drifting Cyber


It's been interesting how the prefix cyber has drifted in meaning over the years. Let's explore together.

I wrote two thirds of this article and then I discovered Annalee Newitz was way ahead of me and wrote about the same thing two years ago. Since my article has different details I decided to finish it and put it on my blog after all. There's plenty of room in cyberspace. But read Annalee Newitz's article too!

Ancient Times: κυβερνητική

The ancient Greeks had a bunch of "kybern-" words. κυβερνάω (kybernao) means "to steer", and the Greek words for ship's captain and government are related. κυβερνητική (kybernetike) was used by Plato to mean governance of people.

So kybern- stuff was about steering and governance.


In 1948 Norbert Wiener coined the word "cybernetics" in English based on the Greek word κυβερνητική. Wiener was a mathematician who worked on the automatic aiming of anti-aircraft guns during World War II. Wiener started to think about the general principles of control systems. I appreciate how he extracted some good from thoughts about guns.

A very simple control system most people are familiar with is a thermostat: when the temperature falls below a certain set value, it turns on a heater until the temperature is back at the required value again. We find many more complex control processes in living organisms, such as blood sugar regulation and body temperature regulation.
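As a toy illustration (my own Python sketch, not from Wiener's work), the thermostat feedback loop looks like this:

```python
# Toy thermostat feedback loop: bang-bang control of a simulated room.
def heater_should_run(temperature, setpoint):
    # Turn the heater on below the setpoint, off at or above it.
    return temperature < setpoint

def simulate(start_temp, setpoint, steps):
    # Toy environment: the room loses 1 degree per step and the
    # heater adds 2 while running, so the feedback loop holds the
    # temperature near the setpoint.
    temp, history = start_temp, []
    for _ in range(steps):
        if heater_should_run(temp, setpoint):
            temp += 2
        temp -= 1
        history.append(temp)
    return history
```

The sensor reading feeds back into the decision to heat, which changes the next sensor reading: that closed loop is the essence of a control system.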

Wiener called the study of control systems cybernetics, and investigated general principles behind control systems in a lot of different areas: from electronics to biology to psychology. He foresaw a lot of later developments in computer technology. Interdisciplinary thought like this can be very fruitful.

Wiener's work on cybernetics was quite inspirational in a range of fields, causing the meaning of the word "cybernetics" to become as stretched as "chaos theory" was for a while in the 1990s. Such is the risk of interdisciplinary studies.


At first we didn't have the cyber prefix. We just had cyb.

We move into the space age. In 1960, two researchers, Manfred Clynes and Nathan Kline, also known as the Klynes (okay I just made that up), published an article about the idea of adapting human bodies with cybernetic technology to make them more suitable for the environment of space. With the right combination of drugs and machinery we could adjust humans so they can deal with long duration space voyages. They named the resulting combined cybernetic organism a "cyborg".

They realized that such a cyborg might go mad floating in the vastness of space:

Despite all the care exercised, there remains a strong possibility that somewhere in the course of a long space voyage a psychotic episode might occur, and this is one condition for which no servomechanism can be completely designed at the present time.

Since the cyborg might refuse to take medication voluntarily, the people back home can still pump them full of drugs remotely:

For this reason, if monitoring is adequate, provision should be made for triggering administration of the medication remotely from earth or by a companion if there is a crew on the vehicle.

It sure was a document of its time, including a "space race" reference to possible competing Soviet research into the cyborg topic. Oh no, they might be ahead of us! The article concluded that:

Solving the many technological problems involved in manned space flight by adapting man to his environment, rather than vice versa, will not only mark a significant step forward in man's scientific progress, but may well provide a new and larger dimension for man's spirit as well.

Today we see many examples of what could be described as "cyborg" technology, though we aren't taking it quite as far as these researchers imagined yet. We don't have the technology.

Cyber in Science Fiction

The idea of the human/machine hybrid predates World War II in science fiction, but these researchers gave it a name that stuck.

This is where the cyber prefix starts entering pop culture. Doctor Who in 1966 introduced the Cybermen, biological organisms that have replaced most of their bodies with cybernetic parts. They're cyborgs, and nasty ones: they proceed to forcibly convert victims into more Cybermen. In the 1980s a similar concept was introduced into Star Trek as the Borg. Just as Star Wars turned the older word "Android" into "Droid", Star Trek turned "Cyborg" into "Borg".

So cyber is about cybernetic organisms. Not all of it, though: cybernetics crosses into many disciplines, so it was easy for cyber to become associated with computers and robots as well. Cybertron is the home world of giant robots that can transform into stuff. It involves lots of explosions somehow. Or Cyberdyne Systems, which creates Skynet, which in turn creates a Governator (government again!) that is sent back in time.

Cyberpunk and Cyberspace

But we're getting ahead of ourselves. In the early 1980s, the prefix cyber appears in a new subgenre in science fiction, Cyberpunk. Gone are the gleaming towers, the distant worlds and silver bodysuits of earlier science fiction imagery. Cyberpunk is "high tech and low life" -- the radical collision of high technology with the street. We recognize that world today, though we're a lot less cool with our smartphones than the mirror-shaded, cyber-implanted street toughs envisioned by Cyberpunk fiction.

The seminal work of Cyberpunk fiction is Neuromancer by William Gibson, from 1984. In it Gibson coins the word Cyberspace, which doesn't have boring HTML but instead uses virtual reality to navigate data, as that's just so much cooler.

Cyber was now associated with digital spaces. The Internet was coming. Now the floodgates are open and we're ready for the 1990s.

Riding the 1990s Cyber Highway

The Internet has a longer history, but it really speeds into popular consciousness in the early 1990s. One day it's all boring computer stuff nobody cares about except a few geeks like me; the next day you hear people exchange their email addresses on the bus. Journalists and academics, never afraid to write articles about neat new stuff they can play around with at work (and why not?), produce massive quantities of new words with the prefix cyber. Cyber is cool.

The Internet is bright and new. Nobody has heard of spam yet, and email viruses are still a hoax.

So we are homesteading the cyberfrontier, found new cybercorporations. Are we building a cyberutopia or a cyberghetto with a cyberelite? Will we one day all ascend into cyberimmortality?

You see where this is going. By 1999 people are calling the prefix "terminally overused". Cyber is now uncool.

To Cyber

The cyber prefix then takes a surprising turn and turns into a full-fledged word all by itself! In the late 1990s cyber becomes a verb: to have cybersex in online chat. "Wanna cyber?" Words like "cyberutopia" and "cyberelite" can now elicit a snicker.

It was not to last, though we should still pretend it is, as cyber is about to take a dark turn.

Cyber Turns Dark

Apparently blissfully unaware of the naughty meaning of the word, the cyber prefix has in recent years become re-purposed by serious people for bad stuff that happens online: cybercrime, cyberbullying, cybersecurity, cyberwar. Or maybe it is the other way around, and only with enough distance from fun naughty things can the prefix cyber still hold a useful meaning, and it's such an unfun word now exactly because there was an association with naughty stuff previously.

And after all, we've associated dehumanizing technology with bad stuff in science fiction since at least Frankenstein's Monster, and the prefix cyber has been used in that context for years. Cybermen are still bad guys.

This is a dark turn for cyber. I don't like living in a world of cybersecurity. I imagine that at a cybersecurity conference, overly serious people discuss how to take more privacy away from citizens, in the shadow of ominous threats they don't quite understand. Unless they are busting into hotel rooms while you're nude. (Really. In my own country, no less!)

Will the word cyber remain dark now that the dour people have clenched their fists around it? The word has been versatile so far, but I'm not optimistic. In any case we do need to get some of that 1990s cyberspirit back, and then we can perhaps, again, work on that 1960s new and larger dimension for the human spirit. Or at least have fun.

A Brief History of Reselect


Reselect is a library with a very brief history that until a few days ago was sitting on my personal Github account. What does it do? It's not very relevant for this blog post, as I'm more interested in its history as an open source project. But briefly: it's a JavaScript library. It lets you automatically cache expensive calculations in a client-side web application.
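The core idea -- recompute a derived value only when its inputs change -- can be sketched in a few lines. This is a toy illustration of the concept, not Reselect's actual implementation:

```javascript
// Build a memoized selector: `compute` only runs again when one of the
// input selectors returns a different value than last time.
function createSelector(inputSelectors, compute) {
  let lastArgs = null;
  let lastResult;
  return function (state) {
    const args = inputSelectors.map((sel) => sel(state));
    const changed =
      lastArgs === null || args.some((arg, i) => arg !== lastArgs[i]);
    if (changed) {
      lastResult = compute(...args); // the expensive calculation
      lastArgs = args;
    }
    return lastResult; // cached result if inputs are unchanged
  };
}

// Usage: the total is only recomputed when `state.items` changes.
const selectItems = (state) => state.items;
const selectTotal = createSelector([selectItems], (items) =>
  items.reduce((sum, item) => sum + item.price, 0)
);
```

In a client-side application the "expensive calculation" is typically some derived view of the application state, and the caching keeps re-renders cheap.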

Reselect was born in early July of this year. Since then, in less than four months, it gained more than 700 stars on Github. To compare: I have another project that I spent a few years working on and promoting. I blogged about it, spoke about it at two conferences, and at a meetup or two. It has less than 200 stars.

So I thought it might be instructive and amusing to trace the brief history of this little library.

Context: JavaScript

Reselect is a JavaScript library. JavaScript has been a very fast moving space these last few years. The JavaScript community moves so quickly people make jokes about it: new JavaScript MVC frameworks get invented every day. I actually created my own way back before it was cool; I'm a JavaScript MVC framework hipster.

Tooling for both client and server JavaScript changes very quickly. In the last year we have seen the emergence of ES6, also known as ES2015, which only got formally ratified as a standard last June. It's a huge update to the JavaScript language.

JavaScript is in the interesting position of being a lingua franca: it's a language many developers learn, coming from a wide range of different programming communities. Ideas and values from these different programming communities get adopted into JavaScript. One of the most interesting results of this has been the adoption of functional programming techniques into JavaScript.

I actually find myself learning about more advanced functional programming techniques now because I am writing a front-end web application. This is cool.

Context: React

React is a JavaScript library that came out of Facebook. It lets you build rich client-side web applications. It's good at doing UI stuff. I like it.

The React community is very creative. React rethought how client-side UIs work, and now people are rethinking all kinds of stuff. Some of these new thoughts will turn out to be bad ones. That's normal for creativity. We're learning stuff.

When you have a UI you have to manage the UI state: UI state has to be kept in sync with state on the server somehow. A UI may need to be updated in several places at once.

Last year Facebook presented a new way to manage state they called Flux. It's inspired by some functional programming ideas. Interestingly enough, they didn't release code at first, just talked about the basic idea. As a result, in typical JavaScript fashion, a thousand Flux implementations bloomed.

Then in May of this year, Dan Abramov announced on Twitter "Oh no am I really writing my own Flux library".

Context: Redux

A few words about Dan Abramov. He's smart. He's creative. He's approachable. He's intimidatingly good at open source, even though he's been around for only a short time. He connected to everybody on Twitter, spreading knowledge around. He built very useful libraries.

So in a few short months, at typical JavaScript speed, Redux turned from "yet another Flux library" to what's probably the most popular Flux implementation around today. Dan rethought the way Flux should work, applied some more functional programming concepts, created awesome tooling, and people liked it.

I give it away early, but the rise of Redux explains why Reselect became so popular so quickly: it's riding the coattails of Redux. Dan Abramov promoted it with his Twitter megaphone, and included a reference to it in the Redux documentation as a way to solve a particular class of performance problems.

Reselect prehistory

So how did Reselect come about?

In early June, the calculation caching problem was identified in the Redux issue tracker.

In late June Robbert Binna came out with a pull request.

The birth of Reselect

In the beginning of July, the first React Europe conference was held in Paris. I had signed up to the Hackathon to be held one day before the conference. I was almost more excited about this than the conference itself, because I have very good experiences with collaborative software development meetups. You can really get to know other developers, learn a lot, and build cool new open source infrastructure stuff. It lets you get out of the details of your day to day project and step back and solve deeper problems in a good way.

But a few days before my departure I learned to my dismay that this Hackathon was supposed to be a contest for a prize, not a collaborative hacking session. I grumbled about this on the Redux chat channel. I asked whether anyone was interested in hacking on infrastructure stuff instead.

Luckily Lee Bannard (AKA ellbee) replied and said he was! So we arranged to meet up.

So on the day of the Hackathon the two of us sat outside of the main room where everybody was focused on the contest. Lee had the idea to work on this calculation caching issue. I was a complete noob about this particular problem, so he explained it to me. He had experience with how NuclearJS tackled the problem, and knew about the pull request for Redux.

We also met Dan Abramov there, so we could talk to him directly. He was reluctant to add new facilities to Redux itself, especially as we were still exploring the problem space. So we decided to create a new library for it that could be used with Redux: Reselect.

We sat down and carefully examined the pull request. At first I thought "this is all wrong", but the longer we looked, the more I realized the code was actually quite right and I was wrong: it was elegant, doing what was needed in a few lines of code.

We refactored the code a little, making it more generic. We cleaned up the API a bit. We also added automated tests. Along the way we needed to check it in somewhere so I put it on my personal Github page. At the end of the day we had a pretty nice little library.

I enjoyed the conference a lot, went home and wrote a report.

Reselect afterwards

I got busy with work. While I glanced at Reselect once in a while, I had no immediate use for it in my project. I quickly gave Lee full access to the project so I wasn't going to block anything. Lee Bannard proved a very capable maintainer. He also added a lot of documentation. Dan used his megaphone, and Reselect got quite a few users quickly, and a lot of small contributions. The official Redux documentation points to Reselect.

And as my work project progressed, I found out I did have a use case for it after all, and started using it.

I did not create Reselect

Since it was on my Github account, people naturally associated me with the project, and assumed it was mine. While that's very good for my name recognition, it didn't sit right with me. I spent a day helping it into existence, and a few hours more afterwards, but more credit should go to others. It's very much a collaborative project.

I brought this up with Lee a few times. He insisted repeatedly he did not mind it being on my Github account at all. Now I think he's awesome!

To Rackt

But last week, as people told me explicitly "hey, I use your Reselect library!" I figured I should do something about it. Redux had already moved into the Rackt project to make it a true community project. Rackt is a kind of curated collection of React libraries, and a community group that maintains them. Technically there is not much difference to hosting the project on my Github account, but such details do matter: if you want community contributions in an open source project, it helps to show that it's a genuine community project.

So I proposed we should move Reselect too, and people agreed it would be a good idea. It turned out the Rackt folks were happy to accept Reselect, so a few Tweets later the deed was done and Reselect now lives in its new home.


What lessons can we learn from this?

Collaborative hacking sessions are a good thing. They're common in the Python world and called "sprints". They're regularly organized in association with Python conferences. At React Europe we had a micro sprint: just two people, myself and Lee, working on infrastructure code, with a bit of encouragement from Dan. Imagine what could be accomplished with a larger group of people?

I admit the circumstances were special: Lee picked an idea that was ready and could be done in a day. It also might have helped I have previous sprint experience. Not every sprint has the same results. But at the very least people get to know each other and learn new things. I think a collaborative goal works better for this than a contest. One major reason we go to open source conferences after all is to meet new people and see old friends again, and a sprint can help with that. I've made life-long friends through sprints.

The React Europe organization already picked up on this idea and is planning to do a sprint for its conference next year. I already got my tickets for 2016 and I'm looking forward to that! I hope more people in the JavaScript lingua franca community pick up on this idea.

The other lesson I learned is more personal. I used to be part of a larger open source community centered around Zope. But for various reasons that community stopped being creative and is slowly fading out; I wrote a blog series about my exit from Zope. I was a large player in the Zope community for quite a few years. In contrast I'm a bit player in the React community. But I greatly enjoy the community's creativity and the connections I'm making. I missed that in my open source life and I'm glad it's coming back.

The Emerging GraphQL Python stack

GraphQL is an interesting technology originating at Facebook. It is a query language that lets you get JSON results from a server. It's not a database system but can work with any kind of backend structure. It tries to solve the same issues traditionally solved by HTTP "REST-ish" APIs.

Some problems with REST

When you do a REST-ish HTTP API, you expose information about the server on a bunch of URLs. These URLs each return some data, typically JSON. You can also update the server using HTTP methods, such as POST, PUT and DELETE. The client-side code needs to know what URLs exist on the system and construct URLs based on what it wants to know. If your REST-ish HTTP API is also a proper REST API (aka a hypermedia API), you make sure that all information can actually be accessed without constructing URLs but by following links (or doing search requests) instead -- this is more loosely coupled but also more difficult to implement.

But REST-ish HTTP APIs have some problems:


request spamminess
Imagine you have person resources and address resources. If you have a UI on the client that shows a person's address, you will have to access both resources on separate URLs. This can easily add up to a lot of requests from the client to the server. This not only causes network traffic but can also make it harder to program the client, especially if you can only do a new request based on information you got in another response.

You can reduce this problem by embedding information -- a person resource has address information directly embedded in it. But there's no standard way to control what gets embedded and this makes the next issue worse.

too much information
In an HTTP API, you want to send out as much information about a resource as possible, even if a particular UI doesn't need it. This means that there is more network traffic, and possibly more work done on the server to generate the data even though it's not needed.
too little information
There is typically rather little machine-readable metadata that describes what information really exists on the server. Having such information can really help with tooling, and this in turn can help avoid bugs. There are emerging specifications that tackle this, but they're not commonly used.
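The request spamminess problem can be sketched with a toy "server" simulated by plain async functions. All the names here are hypothetical; the point is the dependent round trips:

```javascript
// Simulated backend data; each fetch stands in for one HTTP request.
const db = {
  persons: { 101: { fullname: "Bob", addressId: 7 } },
  addresses: { 7: { street: "Laserstreet", city: "Super City" } },
};

async function fetchPerson(id) {
  return db.persons[id];   // stands in for GET /persons/101
}

async function fetchAddress(id) {
  return db.addresses[id]; // stands in for GET /addresses/7
}

async function showPersonWithAddress(personId) {
  const person = await fetchPerson(personId);
  // The second request can only be made once the first response has
  // arrived, because it needs person.addressId. Two round trips for
  // one screen of UI, and real screens need far more than two.
  const address = await fetchAddress(person.addressId);
  return { fullname: person.fullname, address };
}
```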

REST will be here to stay for the foreseeable future. There is also nothing inherent in REST that stops you from solving this -- I wrote about this in a previous blog entry. But meanwhile GraphQL has already solved much of this stuff, so it is at the very least interesting to explore.


GraphQL introduces a query language that lets the client express what it really wants from the server. A single request with this query goes to the server, and the server comes back with a complete structure with everything that's needed for a particular state of the UI. To get person information with its address information embedded, you can write something like:

  {
    person(id: 101) {
      fullname
      address {
        street
        number
        postalCode
        city
        country
      }
    }
  }
You get back JSON like:

  {
    "person": {
      "fullname": "Bob Lasereyes",
      "address": {
        "street": "Laserstreet",
        "number": "77",
        "postalCode": "XYZQ",
        "city": "Super City",
        "country": "Mutantia"
      }
    }
  }

Check the GraphQL readme for much more.

This solves the issues with RESTish HTTP APIs:

less spamminess
To represent a single UI state you can typically get away with doing just a single request to the server specifying everything you need. The server then gives you a single response.
the right amount of information
You only get the information you ask for, nothing more, nothing less.
enough meta information
The server has a schema (which tools can introspect) that describes exactly what kind of data you can access.


If you use GraphQL with the React UI library there's another project from Facebook you can use with it: Relay. Relay lets you declare what data you want (using GraphQL) and co-locate those GraphQL snippets with the bits of UI that need them, so your UIs are more composable and can be rearranged more easily. It also has a sophisticated system to help with mutations, so that you display the updated information in the UI as quickly as possible without re-fetching too much data.
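The co-location idea can be illustrated with a toy sketch. This is not Relay's actual API; each "component" here simply declares the GraphQL fields it needs, and the fragments compose into one query:

```javascript
// A child component declares exactly the fields it renders.
const AddressLabel = {
  fragment: "address { street city }",
  render: (person) => `${person.address.street}, ${person.address.city}`,
};

// A parent component's data needs include those of its children,
// so rearranging the UI automatically rearranges the query.
const PersonCard = {
  fragment: `person(id: $id) { fullname ${AddressLabel.fragment} }`,
  render: (person) => `${person.fullname} @ ${AddressLabel.render(person)}`,
};

// Compose the co-located fragments into a single query for the server.
function buildQuery(component) {
  return `query ($id: ID!) { ${component.fragment} }`;
}
```

Relay does this composition (and much more, such as caching and mutations) for real React components, but the principle is the same: the data declaration lives next to the UI that consumes it.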

It's cool, it's just new, I want to explore it to see whether it can tackle some of my use cases and make life easier for developers.

On the server side

So Relay and GraphQL are interesting and cool. What do we need to start using them? To use React with Relay on the client side to build UIs, we need a Relay-compliant GraphQL server.

Facebook released a reference implementation of GraphQL, in JavaScript: graphql-js. It also released a library to help make a GraphQL server Relay compliant, again in JavaScript: graphql-relay-js. It also released a server that exposes GraphQL over HTTP, again in JavaScript: express-graphql.

That's all very cool if your server is in JavaScript. But what if your server is in Python? Luckily the Facebook people anticipated this and GraphQL is not bound to JavaScript. See the GraphQL draft specification and the GraphQL Relay specification.

The Python GraphQL stack

Last week I started exploring the state of the GraphQL stack in Python on the server. I was very pleased to find that it was in good shape already:

  • graphqllib: this is an implementation of GraphQL by Taeho Kim with contributions by an emerging open source community around it. Lots of contributions are by Jake Heinz, who was also very helpful in discussions on the Slack chat (#python at
  • graphql-relay-py: an implementation of graphql-relay-js for Python by Syrus Akbary, so we can make our GraphQL server Relay-compliant.

The piece that was missing was actually using this stack as a backend for a React + Relay frontend. Was it mature enough to do this? I figured I'd give it a try. So I set out to port the one missing piece to Python, the HTTP web server. So I took express-graphql and ported over its code and tests to Python + WSGI using WebOb. The result is wsgi_graphql, a WSGI component that offers the same HTTP API as express-graphql.

It was a fun little exercise. I found a few issues in graphqllib while doing so, and they're fixed already. I even found a minor bug in express-graphql while doing so, which is fixed as well.

So does it work? Can you use React and Relay on the frontend with Python on the backend? I created a demo project, relaypy, that experimentally pulls all these pieces together. It exposes a GraphQL server with a Relay-compliant schema. I hooked up some simple React + Relay code on the frontend. It worked! In addition, I threw in a cool introspection/query UI that was created for GraphQL called GraphiQL. This works too!

Should you be using this stuff in the real world? No, not yet. There are big warning letters on the graphqllib project that it's highly experimental. But while it's all very early days for these components, the Python support has come very far in just a few short months -- GraphQL was only released as a public project in July, and Relay is even younger. I expect that in a short time this stuff will be ready for production and we'll have a capable GraphQL stack in Python that we can use with React and Relay.

Bonus: Graphene

Emerging just last week as well was graphene, a very new library by Syrus Akbary to make implementing GraphQL servers more Pythonic. The API offered by graphqllib is rather low-level, which is nice as it's very flexible, but for many Python projects you'd like something more Pythonic. Graphene promises to be that API.

Thoughts about React Europe

Last week I visited the React Europe conference in Paris; it was the first such event in Europe and the second React conference in the world. Paris, like much of the rest of western Europe during this early July, was insanely hot. The air conditioning at the conference location had trouble keeping up, and bars and restaurants were more like saunas. Nonetheless, much was learned and much fun was had. I'm glad I went!

React, in case you're not aware, is a frontend JavaScript framework. There are a lot of those. I wrote one myself years ago (before it was cool; I'm a frontend framework hipster) called Obviel. React appeals to me because it's component driven and because it makes so many complexities go away with its approach to state.

Another reason I really like React is that its community is so creative. I missed being involved with such a creative community after my exit from Zope, which happened in large part because that community had become less creative. A lot of the web is being rethought by the React community. Whether all of those ideas are good remains to be seen, but it's certainly exciting and good will come from it.

Here is a summary of my experiences at the conference, with some suggestions for the conference organizers sprinkled in. They did a great job, but this way I hope to help them make it even better.


When I heard there would be a hackathon the day before the conference, I immediately signed up. This would be a great way to meet other developers in the React community, work on some open source infrastructure software together, and learn from them. Then a few days before travel I learned there was a contest and prizes. Contest? Prizes? I was somewhat taken aback!

I come from the tradition of sprints in the Python world. Sprints in the Python community originated with the Zope project, and go back to 2001. Sprints can be 1-3 day affairs held either before or after a Python conference. The dedicated sprint is also not uncommon: interested developers gather together somewhere for a few days, sometimes quite a few days, to work on stuff together. This can be a small group of 4 people in someone's offices, or 10 people in a converted barn in the woods, or 30 people in a castle, or even more people in a hotel on a snowy alpine mountain in the winter. I've experienced all of that and more.

What do people do at such sprints? People hack on open source infrastructure together. Beginners are onboarded into projects by more experienced developers. New projects get started. People discuss new approaches. They get to know each other, and learn from each other. They party. Some sprints have more central coordination than others. A sprint is a great way to get to know other developers and do open source together in a room instead of over the internet.

I previously thought the word hackathon to be another word for sprint. But a contest makes people compete with each other, and a sprint is all about collaboration instead.

Luckily I chatted a bit online before the conference and quickly enough found another developer who wanted to work with me on open source stuff, turning it into a proper sprint after all. We put together this little library as a result. I also met Dan Abramov. I'll get back to him later.

When I arrived at the beautiful Mozilla offices in Paris where the sprint was held, it felt like a library -- everybody was quietly nestled behind their own laptop. I was afraid to speak, though characteristically that didn't last long. I may have made a comment that I thought hackathons aren't supposed to be libraries. We got a bit more noise after that.

I thoroughly enjoyed this sprint (as that is what it became after all), and learned a lot. Meanwhile the hackathon went well too for the three Dutch friends I traveled with -- they won the contest!

React Europe organizers, I'd like to request a bit more room for sprint-like collaboration at the next conference. In open source we want to emphasize collaboration more than competition, don't we?


The quality of the talks at the conference was excellent; they got me thinking, which I enjoy. I'll discuss some of the trends and list a few talks that stood out to me personally; my selection says more about my personal interests than about the quality of the talks I don't mention.

Inline styles and animations

Michael Chan gave a talk about Inline Styles. React is about encapsulating bits of UI into coherent components. But styling was originally left up to external CSS, apart from the components. It doesn't have to be. The React community has been exploring ways to put style information into components as well, in part replacing CSS altogether. This is definitely a rethinking of best practices that will cause some resistance, but definitely very interesting. I will have to explore some of the libraries for doing this that have emerged in the React community; perhaps they will fit my developer brain better than CSS has so far.

There were two talks about how you might define animations as well with React. I especially liked Cheng Lou's talk where he explored declarative ways to express animations. Who knows, maybe even unstylish programmers like myself will end up doing animation!

GraphQL and Relay

Lee Byron (creator of immutable-js) gave a talk about GraphQL. GraphQL is a rethinking of client/server communication, originating at Facebook. Instead of leaving it up to the web server to determine the shape of the data the client code sees, GraphQL lets that be expressed by the client code. The idea is that the developer of the client UI code knows better what data they need than the server developer does (even if these are the same person). This has some performance benefits as well, as it can be used to minimize network traffic. Most important to me is that it promises a better way of client UI development: the data arrives in the shape the client developer needs already.

Lee announced the immediate release of a GraphQL introduction, GraphQL draft specification and a reference implementation in JavaScript, resolving a criticism I had in a previous blog post. I started reading the spec that night (I had missed out on the intro; it's a better place to start!).

Joseph Savona gave a talk about the Relay project, which is a way to integrate GraphQL with React. The idea is to specify what data a component needs not only on the client, but right next to the UI components that need it. Before the UI is rendered, the required data is composed into a larger GraphQL query and the data is retrieved. Relay aims to solve a lot of the hard parts of client/server communication in a central framework, making various complexities go away for UI developers. Joseph announced an open source release of Relay for around August. I'm looking forward to learning more about Relay then.

Dan Schafer and Nick Schrock gave a talk about what implementing a GraphQL server actually looks like. GraphQL is a query language, not a database system. It is designed to integrate with whatever backend services you already have, not replace them. This is good as it requires much less buy-in and you can evolve your systems towards GraphQL incrementally -- this was Facebook's own use case as well. To expose your service's data as GraphQL you need to give a server GraphQL framework a description of what your server data looks like and some code on how to obtain this data from the underlying services.
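A drastically simplified sketch of that idea, assuming hypothetical backend service functions and not following any real GraphQL framework's API, might look like this:

```javascript
// Hypothetical existing backend services the GraphQL layer sits on top of.
function personService(id) {
  return { id, fullname: "Bob Lasereyes", addressId: 7 };
}
function addressService(id) {
  return { id, street: "Laserstreet", city: "Super City" };
}

// The "schema description": each field maps to a resolver function that
// knows how to obtain its data from the underlying services.
const resolvers = {
  person: ({ id }) => personService(id),
  address: (person) => addressService(person.addressId),
};

// A toy executor for one fixed query shape: it only calls the resolvers
// for the fields the client actually asked for.
function execute(query) {
  const person = resolvers.person({ id: query.personId });
  const result = { fullname: person.fullname };
  if (query.withAddress) {
    result.address = resolvers.address(person);
  }
  return result;
}
```

A real GraphQL server generalizes this: it walks an arbitrary query, calls the resolver for each requested field, and assembles the response, while the existing services stay untouched.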

Both Dan and Nick spent the break after their talk answering a lot of questions from many interested developers, including me. I spoke to Dan myself and I'm really grateful for all his informative answers.

The GraphQL and Relay developers at Facebook are explicitly hoping to build a real open source community around this technology, and they made a flying start this conference.


All this GraphQL and Relay stuff is exciting, but the way most people integrate React with backends at present is through variations on the Flux pattern. Several talks touched upon Flux during the conference. The talk that stood out was by Dan Abramov, whom I mentioned earlier. This talk has already been released as an online video, and I recommend you watch it. In it Dan develops and debugs a client-side application live, and thanks to the ingenious architecture behind it, he can modify code and see the changes in the application's behavior immediately, without an explicit reload and without having to reenter data. It was really eye-opening.

What makes this style of development possible is a more purely functional approach to the Flux pattern. Dan started the Redux framework, which focuses on making this kind of thing possible. Instead of defining methods that describe how to store data in some global store object, in Redux you define pure functions (reducers) that describe how to transform the store state into a new state.
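The reducer signature -- a pure function from `(state, action)` to a new state -- is the core of the pattern. A minimal sketch (the tiny fold below is my own illustration, not Redux's actual `createStore`):

```javascript
// A counter reducer in the Redux style: a pure function that returns a
// *new* state for each action, never mutating the old one.
function counter(state = 0, action) {
  switch (action.type) {
    case "INCREMENT":
      return state + 1;
    case "DECREMENT":
      return state - 1;
    default:
      return state;
  }
}

// Because the reducer is pure, running a sequence of actions through it
// is just a fold over the action list.
const actions = [
  { type: "INCREMENT" },
  { type: "INCREMENT" },
  { type: "DECREMENT" }
];
const initial = counter(undefined, { type: "@@INIT" }); // 0
const state = actions.reduce(counter, initial);
// state is 1
```

Purity is what buys the magic in Dan's demo: since the same actions always produce the same state, code can be swapped out and the state recomputed without losing anything.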

Dan Abramov is interesting in his own right. He has quickly made a big name for himself in the React community by working on all sorts of exciting new software and approaches, while remaining very approachable. He's doing open source right. He's also in his early twenties. I'm old enough to have run into very smart younger developers before, so his success is not too intimidating for me. I'll try to learn from what he does right and apply it in my own open source work.

The purely functional reducer pattern was all over the conference; I saw references to it in several other talks, especially Kevin Robinson's talk on simplifying the data layer, which explored the power of keeping a log of actions. It has its applications on both clients and servers.
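The action-log idea follows directly from reducer purity: any past state can be recomputed by replaying a prefix of the log, which is what makes "time travel" debugging conceivable. A sketch (my own illustration, not any particular library's API):

```javascript
// A trivial pure reducer over a list of todo items.
function todos(state = [], action) {
  return action.type === "ADD" ? state.concat(action.text) : state;
}

// The log of everything that has happened.
const log = [
  { type: "ADD", text: "write talk" },
  { type: "ADD", text: "book train" },
  { type: "ADD", text: "pack" }
];

// State at any point in time = fold of the log up to that point.
function stateAt(log, step) {
  return log.slice(0, step).reduce(todos, todos(undefined, {}));
}

const afterTwo = stateAt(log, 2);        // ["write talk", "book train"]
const latest = stateAt(log, log.length); // all three items
```

On the server the same idea shows up as event sourcing: the log is the source of truth, and any view of the data is derived from it.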

React itself already set the tone: it makes some powerful functional programming techniques for UI state management available in a JavaScript framework. The React community is now mining more functional programming techniques and making them accessible to JavaScript developers. These are interesting times.

Using React's component nature

There were several talks that touched on how you can use React's component nature. Ryan Florence gave an entertaining talk about how you can incrementally rewrite an existing client-side application to use React components, step by step. Aria Buckles gave a talk on writing good reusable components with React; I recognized several mistakes I've made in my own code, and learned better ways to do things.

Finally, on a topic close to my heart, Evan Morikawa and Ben Gotow gave a talk about how to use React and Flux to turn applications into extensible platforms. Extensible platforms are all over software development; CMSes are a very common example in web development. One could even argue that having an extensible core that supports multiple customers with customizations is the mystical quality that turns an application into "enterprise software".

DX: Developer Experience

The new abbreviation DX was used in a lot of talks. DX stands for "developer experience", by analogy with UX, "user experience".

I've always thought of this concept as usability for developers: a good library or framework offers a good user interface for the developers who build things with it. A library or framework isn't there just to let developers get stuff done, but to let them get it done well: smoothly, avoiding common mistakes, and not letting them down when they need to do something special.

I really appreciated the React community's emphasis on DX. Let's make the easy things easy, and the hard things possible, together.

Gender Diversity

This section is not intended as devastating criticism but as a suggestion. I'm not an expert on this topic at all, but I did want to make this observation.

I've attended a lot of Python conferences over the years. In the early years, the gender balance at those conferences was much like that at React Europe: mostly men, with a few women here and there. But in recent years there has been a big push in the Python community, especially in North America, to change the gender balance at these conferences and in the community as a whole. With success: these days PyCons in North America attract over 30% women attendees. While EuroPython still has a way to go, last year I already noticed a definite trend towards more women speaking and attending. It was a change I appreciated.

Change like this doesn't happen by itself. React Europe made a good start by adopting a code of conduct. We can learn more from what other conference organizers do. Closer to the React community, I've also appreciated the actions of the JSConf Europe organizers in this direction. Some simple ideas: actively reach out to women and encourage them to submit talks, and reach out to user groups.

Of course, for all I know this was in fact done, in which case please do keep it up! If not, consider it my suggestion.


I really enjoyed this conference. There were a lot of interesting talks; too many to go into here. I met a lot of interesting people. Mining functional programming techniques to benefit mere mortal developers such as myself, and developer experience in general, are clearly major trends in the React community.

Now that I'm home I'm looking forward to exploring these ideas and technologies some more. Thank you, organizers, speakers and fellow attendees! And to think that the conference will likely be even better next year! I hope I can make it again.