Morepath 0.13 now with Dectate

We just released Morepath 0.13 (changes). Morepath is your friendly neighborhood Python web framework with super powers, and with 0.13 it has gained a significant power upgrade.

This is the first Morepath release of 2016 and the biggest Morepath release in a while. The major change in Morepath 0.13 is that it is now built on the Dectate meta-framework for configuration.

Morepath's configuration system is now finally documented, in the form of the Dectate documentation. Developers can extend Morepath with new configuration directives and new configuration registries, and these behave exactly like the native ones: they're built the same way.

Dectate offers powerful features that I believe take Morepath's decorator-based configuration system far beyond what you can do with most other web frameworks, which typically use a Python file for configuration or an ad-hoc decorator-based system. A query tool for configuration, for instance. Too bad almost nobody seems to realize how much power this brings to the developer...

The only framework with an equivalent system is Pyramid, but I think Morepath still has some features Pyramid does not: Morepath allows multiple independent configurations in the same runtime, for instance.

With the introduction of Dectate we've dropped Morepath's dependency on Venusian. Venusian was certainly valuable to Morepath, but over time we started to have some issues with it: its requirement to scan Python code was a barrier for beginners, and in some cases it could impact performance.

Dectate does not require scanning of packages in order to find registrations, but scanning can certainly be handy: that way you can't miss any stray decorators in modules that aren't imported anywhere else. Morepath now supports this through the new importscan dependency; importscan provides a recursive import function extracted from Venusian.

Dectate: advanced configuration for Python code

Dectate is a new Python library. It's geared towards framework authors. It's a meta-framework: a framework you can use to easily construct robust and powerful Python frameworks.

So what's a framework anyway? A framework is a system that you supply with code, and it then calls your code at the appropriate times. Don't call us, we'll call you!

What does this look like in practice? Let's imagine you're building a web framework, and you want the people that use your framework to provide routes and functions that generate responses for those routes:

@route('/foo')
def foo_view(request):
    return "Some response!"

This hypothetical web framework then interprets HTTP requests, matches the request path against /foo, and calls the function foo_view to generate the response. Once the response is generated, it sends it back as an HTTP response.

In the abstract, the developer that uses the framework is doing code configuration: you supply some functions or classes along with some configuration metadata, and the framework uses this code at the appropriate times.

So why would you, the framework author, need a meta-framework to implement route? You just create a Python decorator, and when it's called you register the path and the function with some global registry somewhere. Yeah, yeah, "you just", we've heard that before. You could indeed just do that, but perhaps you want more:

  • What if the developer that uses your framework uses route('/foo') in two places? Which one to pick? Does the last one registered win or should this be an error? If the framework should pick the last one, what is the last one? Does this depend on import order?

  • What if there's an error? What if there is some configuration conflict, or your framework decides the developer passed in bad metadata? Ideally you'd tell the developer that uses your framework exactly which decorators have the problem, and where.

  • Perhaps you want to allow reuse: a developer can define a whole bunch of routes and then extend them with some extra routes for particular use cases.

  • Perhaps you want to allow overrides: a developer can define a whole bunch of routes but then override specific ones for particular use cases.

  • Perhaps you want your framework to be extensible with new decorators and new registries. How do you allow this in a way that still allows reuse, overrides and error reporting?

Dectate takes care of all that stuff. It is a documented and well-tested library, and it works for Python 2 and Python 3 code.
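To give a feel for it, here's a rough sketch of how the route directive above could be built with Dectate (based on Dectate's documented API, with the details simplified):

import dectate

class RouteAction(dectate.Action):
    # each kind of directive declares the configuration registries it uses;
    # Dectate sets up a fresh 'routes' dict per app class
    config = {'routes': dict}

    def __init__(self, path):
        self.path = path

    # two actions with the same identifier conflict -- or override,
    # if one of them is declared in a subclass of the app
    def identifier(self, routes):
        return self.path

    def perform(self, obj, routes):
        routes[self.path] = obj

class App(dectate.App):
    route = dectate.directive(RouteAction)

@App.route('/foo')
def foo_view(request):
    return "Some response!"

dectate.commit(App)  # performs registrations and reports conflicts
assert App.config.routes['/foo'] is foo_view

Because the registry and the conflict/override rules live in the action class, reuse, overrides and error reporting come along for free.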

Dectate is a spin-off from the Morepath web framework. Morepath is great and you should use it. Morepath has had a sophisticated configuration framework for some years now, but it had grown new features over time, which resulted in a bit of cruft, and it also was not well documented. To remedy that and make some other improvements, I've now spun it off into its own independent library: Dectate. You can read more about Dectate's history here; Dectate is an expression of many lessons learned over a long time.

It is my hope that Dectate goes beyond Morepath and will be considered by other framework authors. Maybe someone will create a Dectate-based configuration system for other web frameworks such as Django or Flask or Pyramid. Or perhaps someone will use Dectate for some new framework altogether, perhaps one not at all related to the web. Maybe you will! Let me know.

JavaScript Dependencies Revisited: An Example Project

Introduction

A few years ago I wrote an article on how I was overwhelmed by JavaScript dependencies. In it I explored the difficulty of managing dependencies in a JavaScript project: both internal modules as well as depending on external packages. There were a ton of options available, and none of them seemed to entirely fit what I wanted. I followed up on it, going into the problem in more depth. Typically for JavaScript, there were a lot of different solutions to this problem.

The article on how I was overwhelmed is consistently one of the most read articles on this blog, even though the overwhelming majority of posts over the years are actually about Python. Apparently a lot of people are overwhelmed by JavaScript dependency management.

It's time for an update. A few years in JavaScript time is like ten years in normal years, anyway. Everything has changed about five times over since then.

What changed in JavaScript

So let's go through some of the changes:

One of the most important changes is that JavaScript now has a standard way to do imports in the ES6 version of the language, and that people are using it in practice, through transpilers like Babel.

Another change is that using npm and CommonJS packages has emerged as the most popular way to do client-side dependency management, after npm was already the dominant tool on the server. In fact, back in 2013 people were already suggesting I use npm and a bundling tool (like Browserify), and I was resistant then. But they were right. In any case, it was already clear then that CommonJS was one of the most structured ways to do dependencies, and it's therefore no surprise this led to a great tooling ecosystem.

Source map support in browsers has also matured. One of the reasons I was resistant to a compile-time step is that debugging using the browser inspector becomes more difficult. Now that source maps are pretty well established, that is less of a problem. It's still not as good as debugging code that doesn't need a compilation step, but it's tolerable.

While I'd like to be able to do without a compilation step, I need the performance of a bundle, at least until the adoption of HTTP/2 makes this less of a concern. And since I want to use ES6 and JSX, some kind of compilation step cannot be avoided anyway.

A Bundling Example

Last week I talked to Timo Stollenwerk about bundling tools. He asked me to put a little example project together. So I created one: it does bundling through Webpack, lets you use modern ES6 JavaScript through Babel, and has eslint support.

There are a ton of example JavaScript projects out there already. A lot of them have quite a bit of JavaScript code in them to drive tools like gulp or grunt -- something I don't really like. I prefer declarative configuration and reusable libraries, not custom JavaScript code I need to copy into my project. These projects also tend to have quite a bit of code and little documentation that tells you what's going on.

While creating my example project, I went a bit overboard on the README. So this example is the opposite of many others: a lot of documentation and very little code. In the README, I go step by step explaining how to set up a modern client-side JavaScript development environment.

The choices of tools in the project are my own -- they're stuff that I've found works well together and is simple enough for me to understand. Many alternatives are possible.

I hope it is of help to anyone! Enjoy!

https://github.com/faassen/bundle_example

The Incredible Drifting Cyber

Introduction

It's been interesting how the prefix cyber has drifted in meaning over the years. Let's explore together.

I wrote two thirds of this article and then I discovered Annalee Newitz was way ahead of me and wrote about the same thing two years ago. Since my article has different details I decided to finish it and put it on my blog after all. There's plenty of room in cyberspace. But read Annalee Newitz's article too!

Ancient Times: κυβερνητική

The ancient Greeks had a bunch of "kybern-" words. κυβερνάω (kybernao) means "to steer", and the Greek words for ship's captain and government are related. κυβερνητική (kybernetike) was used by Plato to mean governance of people.

So kybern- stuff was about steering and governance.

Cybernetics

In 1948 Norbert Wiener coined the word "cybernetics" in English based on the Greek word κυβερνητική. Wiener was a mathematician who worked on the automatic aiming of anti-aircraft guns during World War II. Wiener started to think about the general principles of control systems. I appreciate how he extracted some good from thoughts about guns.

A very simple control system most people are familiar with is a thermostat: when the temperature falls below a certain set value, it turns on a heater until the temperature is back at the required value again. We find many more complex control processes in living organisms, such as blood sugar regulation and body temperature regulation.

Wiener called the study of control systems cybernetics, and investigated general principles behind control systems in a lot of different areas: from electronics to biology to psychology. He foresaw a lot of later developments in computer technology. Interdisciplinary thought like this can be very fruitful.

Wiener's work on cybernetics was quite inspirational in a range of fields, causing the meaning of the word "cybernetics" to become as stretched as "chaos theory" was for a while in the 1990s. Such is the risk of interdisciplinary studies.

Cyborg

At first we didn't have the cyber prefix. We just had cyb.

We move into the space age. In 1960, two researchers, Manfred Clynes and Nathan Kline, also known as the Klynes (okay I just made that up), published an article about the idea of adapting human bodies with cybernetic technology to make them more suitable for the environment of space. With the right combination of drugs and machinery we could adjust humans so they can deal with long duration space voyages. They named the resulting combined cybernetic organism a "cyborg".

They realized that such a cyborg might go mad floating in the vastness of space:

Despite all the care exercised, there remains a strong possibility that somewhere in the course of a long space voyage a psychotic episode might occur, and this is one condition for which no servomechanism can be completely designed at the present time.

Since the cyborg might refuse to take medication voluntarily, they made sure it could still be pumped full of drugs remotely:

For this reason, if monitoring is adequate, provision should be made for triggering administration of the medication remotely from earth or by a companion if there is a crew on the vehicle.

It sure was a document of its time, including a "space race" reference to possible competing Soviet research into the cyborg topic. Oh no, they might be ahead of us! The article concluded:

Solving the many technological problems involved in manned space flight by adapting man to his environment, rather than vice versa, will not only mark a significant step forward in man's scientific progress, but may well provide a new and larger dimension for man's spirit as well.

Today we see many examples of what could be described as "cyborg" technology, though we aren't taking it quite as far as these researchers imagined yet. We don't have the technology.

Cyber in Science Fiction

The idea of the human/machine hybrid predates World War II in science fiction, but these researchers gave it a name that stuck.

This is where the cyber prefix starts entering pop culture. Doctor Who in 1966 introduced the Cybermen, biological organisms that have replaced most of their bodies with cybernetic parts. They're cyborgs, and nasty ones: they proceed to forcibly convert victims into more Cybermen. In the 1980s a similar concept was introduced into Star Trek as the Borg. Just as Star Wars turned the older word "Android" into "Droid", Star Trek turned "Cyborg" into "Borg".

So cyber is about cybernetic organisms. Not all of it, though: cybernetics crosses into many disciplines, so it was easy for cyber to become associated with computers and robots as well. Cybertron is the home world of giant robots that can transform into stuff; it involves lots of explosions somehow. Or Cyberdyne Systems, which creates Skynet, which in turn creates a Governator (government again!) that is sent back in time.

Cyberpunk and Cyberspace

But we're getting ahead of ourselves. In the early 1980s, the prefix cyber appears in a new subgenre of science fiction: Cyberpunk. Gone are the gleaming towers, the distant worlds and silver bodysuits of earlier science fiction imagery. Cyberpunk is "high tech and low life" -- the radical collision of high technology with the street. We know that world today as, well, today, though we're a lot less cool with our smartphones than the mirror-shaded, cyber-implanted street toughs envisioned by Cyberpunk fiction.

The seminal work of Cyberpunk fiction is Neuromancer by William Gibson, from 1984. In it Gibson coins the word Cyberspace, which doesn't have boring HTML but instead uses virtual reality to navigate data, as that's just so much cooler.

Cyber was now associated with digital spaces. The Internet was coming. Now the floodgates are open and we're ready for the 1990s.

Riding the 1990s Cyber Highway

The Internet has a longer history, but it really speeds into popular consciousness in the early 1990s. One day it's all boring computer stuff nobody cares about except a few geeks like me; the next day you hear people exchange their email addresses on the bus. Journalists and academics, never afraid to write articles about neat new stuff they can play around with at work (and why not?), produce massive quantities of new words with the prefix cyber. Cyber is cool.

The Internet is bright and new. Nobody has heard of spam yet, and email viruses are still a hoax.

So we are homesteading the cyberfrontier, founding new cybercorporations. Are we building a cyberutopia or a cyberghetto with a cyberelite? Will we one day all ascend into cyberimmortality?

You see where this is going. By 1999 people are calling the prefix "terminally overused". Cyber is now uncool.

To Cyber

The cyber prefix then takes a surprising turn: it becomes a full-fledged word all by itself! In the late 1990s cyber becomes a verb: to have cybersex in online chat. "Wanna cyber?" Words like "cyberutopia" and "cyberelite" can now elicit a snicker.

It was not to last, though, as cyber is about to take a dark turn.

Cyber Turns Dark

Apparently blissfully unaware of the naughty meaning of the word, serious people have in recent years re-purposed the cyber prefix for bad stuff that happens online: cybercrime, cyberbullying, cybersecurity, cyberwar. Or maybe it is the other way around: only with enough distance from fun naughty things can the prefix cyber still hold a useful meaning, and it is such an unfun word now exactly because of its earlier association with naughty stuff.

And after all, we've associated dehumanizing technology with bad stuff in science fiction since at least Frankenstein's Monster, and the prefix cyber has been used in that context for years. Cybermen are still bad guys.

This is a dark turn for cyber. I don't like living in a world of cybersecurity. I imagine that at a cybersecurity conference, overly serious people discuss how to take more privacy away from citizens, in the shadow of ominous threats they don't quite understand. Unless they are busting into hotel rooms while you're nude. (Really. In my own country, no less!)

Will the word cyber remain dark now that the dour people have clenched their fists around it? The word has been versatile so far, but I'm not optimistic. In any case we do need to get some of that 1990s cyberspirit back, and then we can perhaps, again, work on that 1960s new and larger dimension for the human spirit. Or at least have fun.

A Brief History of Reselect

Introduction

Reselect is a library with a very brief history that until a few days ago was sitting on my personal Github account. What does it do? That's not very relevant for this blog post, as I'm more interested in its history as an open source project. But briefly: it's a JavaScript library that lets you automatically cache expensive calculations in a client-side web application.

Reselect was born in early July of this year. Since then, in less than four months, it gained more than 700 stars on Github. To compare: I have another project that I spent a few years working on and promoting. I blogged about it, spoke about it at two conferences, and at a meetup or two. It has less than 200 stars.

So I thought it might be instructive and amusing to trace the brief history of this little library.

Context: JavaScript

Reselect is a JavaScript library. JavaScript has been a very fast moving space these last few years. The JavaScript community moves so quickly people make jokes about it: new JavaScript MVC frameworks get invented every day. I actually created my own way back before it was cool; I'm a JavaScript MVC framework hipster.

Tooling for both client and server JavaScript changes very quickly. In the last year we have seen the emergence of ES6, also known as ES2015, which was only formally ratified as a standard last June. It's a huge update to the JavaScript language.

JavaScript is in the interesting position of being a lingua franca: it's a language many developers learn, coming from a wide range of different programming communities. Ideas and values from these different programming communities get adopted into JavaScript. One of the most interesting results of this has been the adoption of functional programming techniques into JavaScript.

I actually find myself learning about more advanced functional programming techniques now because I am writing a front-end web application. This is cool.

Context: React

React is a JavaScript library that came out of Facebook. It lets you build rich client-side web applications. It's good at doing UI stuff. I like it.

The React community is very creative. React rethought how client-side UIs work, and now people are rethinking all kinds of stuff. Some of these new thoughts will turn out to be bad ones. That's normal for creativity. We're learning stuff.

When you have a UI you have to manage the UI state: UI state has to be kept in sync with state on the server somehow. A UI may need to be updated in several places at once.

Last year Facebook presented a new way to manage state they called Flux. It's inspired by some functional programming ideas. Interestingly enough, they didn't release code at first, just talked about the basic idea. As a result, in typical JavaScript fashion, a thousand Flux implementations bloomed.

Then in May of this year, Dan Abramov announced on Twitter "Oh no am I really writing my own Flux library".

Context: Redux

A few words about Dan Abramov. He's smart. He's creative. He's approachable. He's intimidatingly good at open source, even though he's been around for only a short time. He connected to everybody on Twitter, spreading knowledge around. He built very useful libraries.

So in a few short months, at typical JavaScript speed, Redux turned from "yet another Flux library" to what's probably the most popular Flux implementation around today. Dan rethought the way Flux should work, applied some more functional programming concepts, created awesome tooling, and people liked it.

I'll give it away early: the rise of Redux explains why Reselect became so popular so quickly. It's riding the coattails of Redux. Dan Abramov promoted it with his Twitter megaphone, and included a reference to it in the Redux documentation as a way to solve a particular class of performance problems.

Reselect prehistory

So how did Reselect come about?

In early June, the problem was identified in the Redux issue tracker.

In late June, Robbert Binna came out with a pull request that tackled it.

The birth of Reselect

In the beginning of July, the first React Europe conference was held in Paris. I had signed up for the Hackathon held one day before the conference. I was almost more excited about this than about the conference itself, because I have very good experiences with collaborative software development meetups. You can really get to know other developers, learn a lot, and build cool new open source infrastructure stuff. It lets you get out of the details of your day-to-day project, step back, and solve deeper problems in a good way.

But a few days before my departure I learned to my dismay that this Hackathon was supposed to be a contest for a prize, not a collaborative hacking session. I grumbled about this on the Redux chat channel. I asked whether anyone was interested in hacking on infrastructure stuff instead.

Luckily Lee Bannard (AKA ellbee) replied and said he was! So we arranged to meet up.

So on the day of the Hackathon the two of us sat outside of the main room where everybody was focused on the contest. Lee had the idea to work on this calculation caching issue. I was a complete noob about this particular problem, so he explained it to me. He had experience with how NuclearJS tackled the problem, and knew about the pull request for Redux.

We also met Dan Abramov there, so we could talk to him directly. He was reluctant to add new facilities to Redux itself, especially as we were still exploring the problem space. So we decided to create a new library for it that could be used with Redux: Reselect.

We sat down and carefully examined the pull request. At first I thought "this is all wrong", but the longer we looked, the more I realized the code was actually quite right and I was wrong: it was elegant, doing what was needed in a few lines of code.

We refactored the code a little, making it more generic. We cleaned up the API a bit. We also added automated tests. Along the way we needed to check it in somewhere so I put it on my personal Github page. At the end of the day we had a pretty nice little library.

I enjoyed the conference a lot, went home and wrote a report.

Reselect afterwards

I got busy with work. While I glanced at Reselect once in a while, I had no immediate use for it in my project. I quickly gave Lee full access to the project so I wouldn't block anything. Lee Bannard proved a very capable maintainer. He also added a lot of documentation. Dan used his megaphone, and Reselect quickly got quite a few users, and a lot of small contributions. The official Redux documentation points to Reselect.

And as my work project progressed, I found out I did have a use case for it after all, and started using it.

I did not create Reselect

Since it was on my Github account, people naturally associated me with the project, and assumed it was mine. While that's very good for my name recognition, it didn't sit right with me. I spent a day helping it into existence, and a few hours more afterwards, but more credit should go to others. It's very much a collaborative project.

I brought this up with Lee a few times. He insisted repeatedly he did not mind it being on my Github account at all. Now I think he's awesome!

To Rackt

But last week, as people told me explicitly "hey, I use your Reselect library!", I figured I should do something about it. Redux had already moved into the Rackt project to make it a true community project. Rackt is a kind of curated collection of React libraries, and a community group that maintains them. Technically there is not much difference from hosting the project on my Github account, but such details do matter: if you want community contributions in an open source project, it helps to show that it's a genuine community project.

So I proposed we should move Reselect too, and people agreed it would be a good idea. It turned out the Rackt folks were happy to accept Reselect, so a few Tweets later the deed was done and Reselect now lives in its new home.

Thoughts

What lessons can we learn from this?

Collaborative hacking sessions are a good thing. They're common in the Python world, where they're called "sprints", and they're regularly organized in association with Python conferences. At React Europe we had a micro sprint: just two people, myself and Lee, working on infrastructure code, with a bit of encouragement from Dan. Imagine what could be accomplished with a larger group of people!

I admit the circumstances were special: Lee picked an idea that was ready and could be done in a day. It might also have helped that I have previous sprint experience. Not every sprint has the same results. But at the very least people get to know each other and learn new things. I think a collaborative goal works better for this than a contest. One major reason we go to open source conferences, after all, is to meet new people and see old friends again, and a sprint can help with that. I've made life-long friends through sprints.

The React Europe organization already picked up on this idea and is planning to do a sprint for its conference next year. I already got my tickets for 2016 and I'm looking forward to that! I hope more people in the JavaScript lingua franca community pick up on this idea.

The other lesson I learned is more personal. I used to be part of a larger open source community centered around Zope. But for various reasons that community stopped being creative and is slowly fading out; I wrote a blog series about my exit from Zope. I was a large player in the Zope community for quite a few years. In contrast I'm a bit player in the React community. But I greatly enjoy the community's creativity and the connections I'm making. I missed that in my open source life and I'm glad it's coming back.

The Emerging GraphQL Python stack

GraphQL is an interesting technology originating at Facebook. It is a query language that lets you get JSON results from a server. It's not a database system but can work with any kind of backend structure. It tries to solve the same issues traditionally solved by HTTP "REST-ish" APIs.

Some problems with REST

When you do a REST-ish HTTP API, you expose the information on the server through a bunch of URLs. These URLs each return some data, typically JSON. You can also update the server using HTTP methods such as POST, PUT and DELETE. The client-side code needs to know what URLs exist on the system and construct URLs based on what it wants to know. If your REST-ish HTTP API is also a proper REST API (aka a hypermedia API), you make sure that all information can actually be accessed without constructing URLs, by following links (or doing search requests) instead -- this is more loosely coupled but also more difficult to implement.

But REST-ish HTTP APIs have some problems:

spamminess

Imagine you have person resources and address resources. If you have a UI on the client that shows a person's address, you will have to access both resources on separate URLs. This can easily add up to a lot of requests from the client to the server. This not only causes network traffic but can also make it harder to program the client, especially if you can only do a new request based on information you got in another response.

You can reduce this problem by embedding information -- a person resource has address information directly embedded in it. But there's no standard way to control what gets embedded and this makes the next issue worse.

too much information

In an HTTP API, you want to send out as much information about a resource as possible, even if a particular UI doesn't need it. This means that there is more network traffic, and possibly more work done on the server to generate data that isn't needed.

too little information

There is typically rather little machine-readable metadata that describes what information really exists on the server. Having such information can really help with tooling, and this in turn can help avoid bugs. There are emerging specifications that tackle this, but they're not commonly used.

REST is here to stay for the foreseeable future. There is also nothing inherent in REST that stops you from solving these issues -- I wrote about this in a previous blog entry. But meanwhile GraphQL has already solved much of this, so it is at the very least interesting to explore.

GraphQL

GraphQL introduces a query language that lets the client express what it really wants from the server. A single request with this query goes to the server, and the server comes back with a complete structure with everything that's needed for a particular state of the UI. To get person information with its address information embedded, you can write something like:

{
  person(id: 101) {
    fullname,
    address {
      street
      number
      postalCode
      city
      country
    }
  }
}

You get back JSON like:

{
  "person": {
    "fullname": "Bob Lasereyes",
    "address": {
      "street": "Laserstreet",
      "number": "77",
      "postalCode": "XYZQ",
      "city": "Super City",
      "country": "Mutantia"
    }
  }
}

Check the GraphQL readme for much more.

This solves the issues with REST-ish HTTP APIs:

less spamminess

To represent a single UI state you can typically get away with doing just a single request to the server specifying everything you need. The server then gives you a single response.

the right amount of information

You only get the information you ask for, nothing more, nothing less.

enough meta information

The server has a schema (which tools can introspect) that describes exactly what kind of data you can access.

Relay

If you use GraphQL with the React UI library, there's another project from Facebook you can use with it: Relay. Relay lets you declare what data you want (using GraphQL) and co-locate these GraphQL snippets with the bits of UI that need them, so your UIs are more composable and can be rearranged more easily. It also has a sophisticated system to help with mutations, so that you display updated information in the UI as quickly as possible without re-fetching too much data.

It's cool, but it's new; I want to explore it to see whether it can tackle some of my use cases and make life easier for developers.

On the server side

So Relay and GraphQL are interesting and cool. What do we need to start using them? To use React with Relay on the client side to build UIs, we need a Relay-compliant GraphQL server.

Facebook released a reference implementation of GraphQL, in JavaScript: graphql-js. It also released a library to help make a GraphQL server Relay compliant, again in JavaScript: graphql-relay-js. It also released a server that exposes GraphQL over HTTP, again in JavaScript: express-graphql.

That's all very cool if your server is in JavaScript. But what if your server is in Python? Luckily the Facebook people anticipated this and GraphQL is not bound to JavaScript. See the GraphQL draft specification and the GraphQL Relay specification.

The Python GraphQL stack

Last week I started exploring the state of the GraphQL stack in Python on the server. I was very pleased to find that it was in good shape already:

  • graphqllib: this is an implementation of GraphQL by Taeho Kim with contributions by an emerging open source community around it. Lots of contributions are by Jake Heinz, who was also very helpful in discussions on the Slack chat (#python at https://graphql-slack.herokuapp.com/).

  • graphql-relay-py: a port of graphql-relay-js to Python by Syrus Akbary, so we can make our GraphQL server Relay-compliant.

The piece that was missing was actually using this stack as a backend for a React + Relay frontend. Was it mature enough to do this? I figured I'd give it a try and set out to port the one missing piece to Python: the HTTP web server. I took express-graphql and ported its code and tests over to Python + WSGI using WebOb. The result is wsgi_graphql, a WSGI component that offers the same HTTP API as express-graphql.

It was a fun little exercise. I found a few issues in graphqllib along the way, and they're fixed already. I even found a minor bug in express-graphql, which is fixed as well.

So does it work? Can you use React and Relay on the frontend with Python on the backend? I created a demo project, relaypy, that experimentally pulls all these pieces together. It exposes a GraphQL server with a Relay-compliant schema. I hooked up some simple React + Relay code on the frontend. It worked! In addition, I threw in a cool introspection/query UI that was created for GraphQL called GraphiQL. This works too!

Should you be using this stuff in the real world? No, not yet. There are big warning letters on the graphqllib project that it's highly experimental. While it's all very early days for these components, the Python support has come very far in just a few short months -- GraphQL was only released as a public project in July, and Relay is even younger. I expect that in a short time this stuff will be ready for production and we'll have a capable GraphQL stack in Python that we can use with React and Relay.

Bonus: Graphene

Also emerging just last week was graphene, a very new library by Syrus Akbary to make implementing GraphQL servers more Pythonic. The API offered by graphqllib is rather low-level, which is nice as it's very flexible, but for many Python projects you'd like something more Pythonic. Graphene promises to be that API.
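To give an impression, a minimal graphene schema might look something like this (a sketch only; graphene is brand new and its API is still settling, so the exact signatures may differ):

import graphene

class Query(graphene.ObjectType):
    # a single 'hello' field taking an optional 'name' argument
    hello = graphene.String(name=graphene.String(default_value="world"))

    def resolve_hello(root, info, name):
        return "Hello " + name

schema = graphene.Schema(query=Query)

result = schema.execute('{ hello(name: "Bob") }')
print(result.data)  # {'hello': 'Hello Bob'}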

Thoughts about React Europe

Last week I visited the React Europe conference in Paris; it was the first such event in Europe and the second React conference in the world. Paris, like much of the rest of western Europe during early July, was insanely hot. The air conditioning at the conference venue had trouble keeping up, and bars and restaurants were more like saunas. Nonetheless, much was learned and much fun was had. I'm glad I went!

React, in case you're not aware, is a frontend JavaScript framework. There are a lot of those. I wrote one myself years ago (before it was cool; I'm a frontend framework hipster) called Obviel. React appeals to me because it's component driven and because it makes so many complexities go away with its approach to state.

Another reason I really like React is that its community is so creative. I missed being involved with such a creative community after my exit from Zope, which was due in large part to that community becoming less creative. A lot of the web is being rethought by the React community. Whether all of those ideas are good remains to be seen, but it's certainly exciting and good will come from it.

Here is a summary of my experiences at the conference, with some suggestions for the conference organizers sprinkled in. They did a great job, but this way I hope to help them make it even better.

Hackathon

When I heard there would be a hackathon the day before the conference, I immediately signed up. This would be a great way to meet other developers in the React community, work on some open source infrastructure software together, and learn from them. Then a few days before travel I learned there was a contest and prizes. Contest? Prizes? I was somewhat taken aback!

I come from the tradition of sprints in the Python world. Sprints in the Python community originated with the Zope project, and go back to 2001. Sprints can be 1-3 day affairs held either before or after a Python conference. The dedicated sprint is also not uncommon: interested developers gather together somewhere for a few days, sometimes quite a few days, to work on stuff together. This can be a small group of 4 people in someone's offices, or 10 people in a converted barn in the woods, or 30 people in a castle, or even more people in a hotel on a snowy alpine mountain in the winter. I've experienced all of that and more.

What do people do at such sprints? People hack on open source infrastructure together. Beginners are onboarded into projects by more experienced developers. New projects get started. People discuss new approaches. They get to know each other, and learn from each other. They party. Some sprints have more central coordination than others. A sprint is a great way to get to know other developers and do open source together in a room instead of over the internet.

I previously thought the word hackathon was just another word for sprint. But a contest makes people compete with each other, whereas a sprint is all about collaboration.

Luckily I chatted a bit online before the conference and quickly enough found another developer who wanted to work with me on open source stuff, turning it into a proper sprint after all. We put together this little library as a result. I also met Dan Abramov. I'll get back to him later.

When I arrived at the beautiful Mozilla offices in Paris where the sprint was held, it felt like a library -- everybody was quietly nestled behind their own laptop. I was afraid to speak, though characteristically that didn't last long. I may have made a comment that I thought hackathons aren't supposed to be libraries. We got a bit more noise after that.

I thoroughly enjoyed this sprint (as that is what it became after all), and learned a lot. Meanwhile the hackathon went well too for the three Dutch friends I traveled with -- they won the contest!

React Europe organizers, I'd like to request a bit more room for sprint-like collaboration at the next conference. In open source we want to emphasize collaboration more than competition, don't we?

Conference

The quality of the talks at the conference was excellent; they got me thinking, which I enjoy. I'll discuss some of the trends and list a few talks that stood out to me personally; my selection has more to do with my personal interests than with the quality of the talks I don't mention.

Inline styles and animations

Michael Chan gave a talk about Inline Styles. React is about encapsulating bits of UI into coherent components. But styling was originally left up to external CSS, apart from the components. It doesn't have to be. The React community has been exploring ways to put style information into components as well, in part replacing CSS altogether. This is a rethinking of best practices that will cause some resistance, but it is definitely very interesting. I will have to explore some of the libraries for doing this that have emerged in the React community; perhaps they will fit my developer brain better than CSS has so far.

There were two talks about how you might define animations as well with React. I especially liked Cheng Lou's talk where he explored declarative ways to express animations. Who knows, maybe even unstylish programmers like myself will end up doing animation!

GraphQL and Relay

Lee Byron (creator of immutable-js) gave a talk about GraphQL. GraphQL is a rethink of client/server communication, originating at Facebook. Instead of leaving it up to the web server to determine the shape of the data the client code sees, GraphQL lets the client code express this itself. The idea is that the developer of the client UI code knows better what data they need than the server developer does (even if these are the same person). This has some performance benefits as well, as it can be used to minimize network traffic. Most important to me is that it promises a better way of client UI development: the data arrives already in the shape the client developer needs.

Lee announced the immediate release of a GraphQL introduction, GraphQL draft specification and a reference implementation in JavaScript, resolving a criticism I had in a previous blog post. I started reading the spec that night (I had missed out on the intro; it's a better place to start!).

Joseph Savona gave a talk about the Relay project, which is a way to integrate GraphQL with React. The idea is to specify what data a component needs not only on the client, but right next to the UI components that need it. Before the UI is rendered, the required data is composed into a larger GraphQL query and the data is retrieved. Relay aims to solve a lot of the hard parts of client/server communication in a central framework, making various complexities go away for UI developers. Joseph announced an open source release of Relay for around August. I'm looking forward to learn more about Relay then.

Dan Schafer and Nick Schrock gave a talk about what implementing a GraphQL server actually looks like. GraphQL is a query language, not a database system. It is designed to integrate with whatever backend services you already have, not replace them. This is good as it requires much less buy-in and you can evolve your systems towards GraphQL incrementally -- this was Facebook's own use case as well. To expose your service's data as GraphQL you need to give a server GraphQL framework a description of what your server data looks like and some code on how to obtain this data from the underlying services.

Both Dan and Nick spent the break after their talk answering a lot of questions from many interested developers, including myself. I'm really grateful to Dan for all his informative answers.

The GraphQL and Relay developers at Facebook are explicitly hoping to build a real open source community around this technology, and they made a flying start this conference.

Flux

All this GraphQL and Relay stuff is exciting, but the way most people integrate React with backends at present is through variations on the Flux pattern. There were several talks that touched upon Flux during the conference. The talk that stood out was by Dan Abramov, who I mentioned earlier. This talk has already been released as an online video, and I recommend you watch it. In it Dan develops and debugs a client-side application live, and due to the ingenious architecture behind it, he can modify code and see the changes in the application's behavior immediately, without an explicit reload and without having to reenter data. It was really eye-opening.

What makes this style of development possible is a more purely functional approach towards the Flux pattern. Dan started the Redux framework, which focuses on making this kind of thing possible. Instead of defining methods that describe how to store data in some global store object, in Redux you define pure functions (reducers) that describe how to transform the store state into a new state.
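The idea itself isn't tied to JavaScript at all. Here's a tiny sketch of a reducer in Python, just to illustrate the concept (Redux itself is of course a JavaScript library):

# a reducer is a pure function: (state, action) -> new state
# it never mutates the old state; it returns a fresh value
def counter(state, action):
    if action["type"] == "INCREMENT":
        return state + 1
    if action["type"] == "DECREMENT":
        return state - 1
    return state

state = 0
for action in [{"type": "INCREMENT"}, {"type": "INCREMENT"}]:
    state = counter(state, action)
assert state == 2

Because reducers are pure, you can replay a log of past actions against freshly changed code, which is exactly what makes the live reloading in Dan's demo possible.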

Dan Abramov is interesting by himself. He has quickly made a big name for himself in the React community by working on all sorts of exciting new software and approaches, while being very approachable at the same time. He's doing open source right. He's also in his early 20s. I'm old enough to have run into very smart younger developers before, so his success is not too intimidating for me. I'll try to learn from what he does right and apply it in my own open source work.

The purely functional reducer pattern was all over the conference; I saw references to it in several other talks, especially Kevin Robinson's talk on simplifying the data layer, which explored the power of keeping a log of actions. It has its applications on both clients and servers.

The React web framework already set the tone: it makes some powerful functional programming techniques surrounding UI state management available in a JavaScript framework. The React community is now mining more functional programming techniques, making them accessible to JavaScript developers. It's interesting times.

Using React's component nature

There were several talks that touched on how you can use React's component nature. Ryan Florence gave an entertaining talk about how you can incrementally rewrite an existing client-side application to use React components, step by step. Aria Buckles gave a talk on writing good reusable components with React; I recognized several mistakes I've made in my own code, and better ways to do things.

Finally, in a topic close to my heart, Evan Morikawa and Ben Gotow gave a talk about how to use React and Flux to turn applications into extensible platforms. Extensible platforms are all over software development. CMSes are a very common example in web development. One could even argue that having an extensible core that supports multiple customers with customizations is the mystical quality that turns an application into "enterprise software".

DX: Developer Experience

The new abbreviation DX was used in a lot of talks. DX stands for "Developer Experience" in analogy with UX standing for "user experience".

I've always thought about this concept as usability for developers: a good library or framework offers a good user interface for developers who want to build stuff with it. A library or framework isn't there just to let developers get stuff done, but to let them get it done well: smoothly, avoiding common mistakes, and not letting them down when they need to do something special.

I really appreciated the React community's emphasis on DX. Let's make the easy things easy, and the hard things possible, together.

Gender Diversity

This section is not intended as devastating criticism but as a suggestion. I'm not an expert on this topic at all, but I did want to make this observation.

I've attended a lot of Python conferences over the years. The gender balance at these conferences was in the early years much more like the React Europe conference: mostly men with a few women here and there. But in recent years there has been a big push in the Python community, especially in North America, to change the gender balance at these conferences and the community as a whole. With success: these days PyCons in North America attract over 30% women attendees. While EuroPython still has a way to go, last year I already noticed a definite trend towards more women speaking and attending. It was a change I appreciated.

Change like this doesn't happen by itself. React Europe made a good start by adopting a code of conduct. We can learn more from what other conference organizers do. Closer to the React community, I've also appreciated the actions of the JSConf Europe organizers in this direction. Some simple ideas include actively reaching out to women to submit talks, and reaching out to user groups.

Of course, for all I know this was in fact done, in which case please do keep it up! If not, that's my suggestion.

Conclusions

I really enjoyed this conference. There were a lot of interesting talks; too many to go into here. I met a lot of interesting people. Mining functional programming techniques to benefit mere mortal developers such as myself, and developer experience in general, are clearly major trends in the React community.

Now that I'm home I'm looking forward to exploring these ideas and technologies some more. Thank you organizers, speakers and fellow attendees! And then to think the conference will likely get even better next year! I hope I can make it again.

Build a better batching UI with Morepath and Jinja2

Introduction

This post is the first in what I hope will be a series on neat things you can do with Morepath. Morepath is a Python web micro-framework with some very interesting capabilities. What we'll look at today is what you can do with Morepath's link generation in a server-driven web application. While Morepath is an excellent fit for creating REST APIs, it also works well for server-driven applications. So let's look at how Morepath can help you create a batching UI.

On the special occasion of this post we also released a new version of Morepath, Morepath 0.11.1!

A batching UI is a UI where you have a larger amount of data available than you want to show to the user at once. You instead partition the data in smaller batches, and you let the user navigate through these batches by clicking a previous and next link. If you have 56 items in total and the batch size is 10, you first see items 0-9. You can then click next to see items 10-19, then items 20-29, and so on until you see the last few items 50-55. Clicking previous will take you backwards again.

In this example, a URL to see a single batch looks like this:

http://example.com/?start=20

to see items 20-29. You can also approach the application like this:

http://example.com/

to start at the first batch.

I'm going to highlight the relevant parts of the application here. The complete example project can be found on Github. I have included instructions on how to install the app in the README.rst there.

Model

First we need to define a few model classes to define the application. We are going to go for a fake database of fake persons that we want to batch through.

Here's the Person class:

class Person(object):
    def __init__(self, id, name, address, email):
        self.id = id
        self.name = name
        self.address = address
        self.email = email

We use the neat fake-factory package to create some fake data for our fake database; the fake database is just a Python list:

from faker import Faker

fake = Faker()

def generate_random_person(id):
    return Person(id, fake.name(), fake.address(), fake.email())

def generate_random_persons(amount):
    return [generate_random_person(id) for id in range(amount)]

person_db = generate_random_persons(56)

So far nothing special. But next we create a special PersonCollection model that represents a batch of persons:

BATCH_SIZE = 10

class PersonCollection(object):
    def __init__(self, persons, start):
        self.persons = persons
        if start < 0 or start >= len(persons):
            start = 0
        self.start = start

    def query(self):
        return self.persons[self.start:self.start + BATCH_SIZE]

    def previous(self):
        if self.start == 0:
            return None
        start = self.start - BATCH_SIZE
        if start < 0:
            start = 0
        return PersonCollection(self.persons, start)

    def next(self):
        start = self.start + BATCH_SIZE
        if start >= len(self.persons):
            return None
        return PersonCollection(self.persons, start)

To create an instance of PersonCollection you need two arguments: persons, which is going to be our person_db we created before, and start, which is the start index of the batch.

We define a query method that queries the persons we need from the larger batch, based on start and a global constant, BATCH_SIZE. Here we do this by simply taking a slice. In a real application you'd execute some kind of database query.
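For instance, with SQLAlchemy the query method might look roughly like this (a sketch only; it assumes Person is a mapped ORM class and session is a SQLAlchemy session, neither of which exists in this example project):

def query(self):
    # offset/limit does the slicing in the database instead of in Python
    return (
        session.query(Person)
        .order_by(Person.id)
        .offset(self.start)
        .limit(BATCH_SIZE)
        .all()
    )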

We also define previous and next methods. These give back the previous PersonCollection and next PersonCollection. They use the same persons database, but adjust the start of the batch. If there is no previous or next batch as we're at the beginning or the end, these methods return None.

There is nothing directly web-related in this code, though of course PersonCollection is there to serve our web application in particular. But as you'll notice, there is absolutely no interaction with the request or any other parts of the Morepath API. This makes it easier to reason about this code: you can for instance write unit tests that just test the behavior of these instances, without dealing with requests, HTML, etc.
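For example, a minimal test of the batching behavior could look like this (a sketch, using the BATCH_SIZE of 10 from above):

def test_last_batch():
    persons = generate_random_persons(56)
    collection = PersonCollection(persons, 50)
    # the last batch only holds the remaining items 50-55
    assert len(collection.query()) == 6
    assert collection.next() is None
    # the batch before it starts at 40
    assert collection.previous().start == 40

def test_first_batch_has_no_previous():
    persons = generate_random_persons(56)
    collection = PersonCollection(persons, 0)
    assert collection.previous() is None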

Path

Now we expose these models to the web. We tell Morepath what models are behind what URLs, and how to create URLs to models:

@App.path(model=PersonCollection, path='/')
def get_person_collection(start=0):
    return PersonCollection(person_db, start)

@App.path(model=Person, path='{id}',
          converters={'id': int})
def get_person(id):
    try:
        return person_db[id]
    except IndexError:
        return None

Let's look at this in more detail:

@App.path(model=PersonCollection, path='/')
def get_person_collection(start=0):
    return PersonCollection(person_db, start)

This is not a lot of code, but it actually tells Morepath a lot:

  • When you go to the root path / you get the instance returned by the get_person_collection function.

  • This URL takes a request parameter start, for instance ?start=10.

  • This request parameter is optional. If it's not given it defaults to 0.

  • Since the default is a Python int object, Morepath rejects any requests with request parameters that cannot be converted to an integer as a 400 Bad Request. So ?start=11 is legal, but ?start=foo is not.

  • When asked for the link to a PersonCollection instance in Python code, as we'll see soon, Morepath uses this information to construct that link.

Now let's look at get_person:

@App.path(model=Person, path='{id}',
          converters={'id': int})
def get_person(id):
    try:
        return person_db[id]
    except IndexError:
        return None

This uses a path with a parameter in it, id, which is passed to the get_person function. It explicitly tells the system to expect an int and to reject anything else, but we could've used id=0 as a default parameter here instead, too. Finally, get_person can return None if the id is not known in our Python list "database". Morepath automatically turns this into a 404 Not Found for you.

View & template for Person

While PersonCollection and Person instances now have URLs, we didn't tell Morepath yet what to do when someone goes there. So for now, these URLs will respond with a 404 Not Found.

Let's fix this by defining some Morepath views. We'll do a simple view for Person first:

@App.html(model=Person, template='person.jinja2')
def person_default(self, request):
    return {
        'id': self.id,
        'name': self.name,
        'address': self.address,
        'email': self.email
    }

We use the html decorator to indicate that this view delivers data of Content-Type text/html, and that it uses a person.jinja2 template to do so.

The person_default function itself gets a self and a request argument. The self argument is an instance of the model class indicated in the decorator, so a Person instance. The request argument is a WebOb request instance. We give the template the data returned in the dictionary.

The template person.jinja2 looks like this:

<!DOCTYPE html>
<html>
  <head>
    <title>Morepath batching demo</title>
  </head>
  <body>
    <p>
      Name: {{ name }}<br/>
      Address: {{ address }}<br/>
      Email: {{ email }}<br />
    </p>
  </body>
</html>

Here we use the Jinja2 template language to render the data to HTML. Morepath out of the box does not support Jinja2; it's template language agnostic. But in our example we use the Morepath extension more.jinja2 which integrates Jinja2. Chameleon support is also available in more.chameleon in case you prefer that.
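Hooking up more.jinja2 looks roughly like this (a sketch based on more.jinja2's documentation, so the exact details may differ; the template_directory directive tells Morepath where to find person.jinja2 and friends):

from more.jinja2 import Jinja2App

class App(Jinja2App):
    pass

@App.template_directory()
def get_template_directory():
    # directory with our .jinja2 templates, relative to this module
    return 'templates'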

View & template for PersonCollection

Here is the view that exposes PersonCollection:

@App.html(model=PersonCollection, template='person_collection.jinja2')
def person_collection_default(self, request):
    return {
        'persons': self.query(),
        'previous_link': request.link(self.previous()),
        'next_link': request.link(self.next()),
    }

It gives the template the list of persons that is in the current PersonCollection instance so it can show them in a template as we'll see in a moment. It also creates two URLs: previous_link and next_link. These are links to the previous and next batch available, or None if no previous or next batch exists (this is the first or the last batch).

Let's look at the template:

<!DOCTYPE html>
<html>
  <head>
    <title>Morepath batching demo</title>
  </head>
  <body>
    <table>
      <tr>
        <th>Name</th>
        <th>Email</th>
        <th>Address</th>
      </tr>
      {% for person in persons %}
      <tr>
        <td><a href="{{ request.link(person) }}">{{ person.name }}</a></td>
        <td>{{ person.email }}</td>
        <td>{{ person.address }}</td>
      </tr>
      {% endfor %}
    </table>
    {% if previous_link %}
    <a href="{{ previous_link }}">Previous</a>
    {% endif %}
    {% if next_link %}
    <a href="{{ next_link }}">Next</a>
    {% endif %}
  </body>
</html>

A bit more is going on here. First, it loops through the persons list to show all the persons in a batch in an HTML table. The name in the table is a link to the person instance; we use request.link() in the template to create this URL.

The template also shows a previous and a next link, but only if they're not None, that is, when a previous or next batch actually exists.

That's it

And that's it, besides a few details of application setup, which you can find in the complete example project on GitHub.

There's not much to this code, and that's how it should be. I invite you to compare this approach to a batching UI to what an implementation for another web framework looks like. Do you put the link generation code in the template itself? Or as ad hoc code inside the view functions? How clear and concise and testable is that code compared to what we just did here? Do you give back the right HTTP status codes when things go wrong? Consider also how easy it would be to expand the code to include searching in addition to batching.

Do you want to try out Morepath now? Read the very extensive documentation. I hope to hear from you!

GraphQL and REST

Introduction

There is a new trend in open source that I'm not sure I like very much: big companies announce that they are going to open source something, but the release is nowhere in sight. Announcing something invites feedback, especially if it's announced as open source. When the software in question is already available as closed source for people to play with, I don't really mind, as feedback is possible, though it makes me wonder what the point is of holding back on a release.

It's a bit more difficult to give feedback when the thing announced is still heavily in development, with no release in sight. How can one give feedback on something one cannot try? But since you announced you'd open source it, I guess we should not be shy and give you feedback anyway.

Facebook has been doing this kind of announcement a lot recently: they announced React Native before they released it. They also announced Relay and GraphQL, but neither has been released yet. They've given us some information in a few talks and blog posts, and the rest is speculation. If you want to learn more about Relay and GraphQL, I highly recommend you read these slides by Laney Kuenzel.

From the information we do have, Relay looks very interesting indeed. I think I'd like to have the option to use Relay in the future. That means I need to implement GraphQL somehow, or at least something close enough to it, as it's the infrastructure that makes Relay possible.

React is a rethink of how to construct a client-side web application UI. React challenges MVC and bi-directional data binding. GraphQL is a rethink of the way client-side web applications talk to their backend. GraphQL challenges REST.

REST

So what was REST again? Rich frontend applications these days typically talk to an HTTP backend. These backends follow some basic REST patterns: resources live on URLs and you can interact with them using HTTP verbs. Some resources represent single items:

/users/faassen

If you issue a GET request to it you get its representation, typically in JSON. You can also issue a PUT request to it to overwrite its representation, and DELETE to delete it.

In an HTTP API we typically also have a way to access collections:

/users

You can issue GET to this too, possibly with some HTTP query parameters to filter, to get a representation of the users known to the application. You can also issue POST to it to add a new user.

Real proper REST APIs, also known as Hypermedia APIs, go beyond this: they have hyperlinks between resources. I wrote a web framework named Morepath which aims to make it easier to create complex hypermedia APIs, so you can say I'm pretty heavily invested in REST.

Challenging REST

GraphQL challenges REST. The core idea is that the code that best knows what data is needed for a UI is not on the server but on the client. The UI component knows best what data it wants.

REST delivers all the data a UI might need about a resource and it's up to the client to go look for the bits it actually wants to show. If that data is not in a resource it already has, the client needs to go off to the server and request some more data from another URL. With GraphQL the UI gets exactly the data it needs instead, in a shape handy for the UI.

From what Facebook has shown so far, a GraphQL query looks like this:

{
  user(id: 4000) {
    id,
    name,
    isViewerFriend,
    profilePicture(size: 50) {
      uri,
      width,
      height
    }
  }
}

You get back this:

{
  "user" : {
    "id": 4000,
    "name": "Some name",
    "isViewerFriend": true,
    "profilePicture": {
      "uri": "http://example.com/pic.jpg",
      "width": 50,
      "height": 50
    }
  }
}

This says you want an object of type user with id 4000. You are interested in its id, name and isViewerFriend fields.

You also want another object it is connected to: the profilePicture. You want the uri, width and height fields of this. While there is no public GraphQL specification out there yet, I think that size: 50 means to restrict the subquery for a profile picture to only those of size 50. I'm not sure what happens if no profilePicture of this size is available, though.

To talk to the backend, there is only a single HTTP end-point that receives all these queries, or alternatively you use a non-HTTP mechanism like web sockets. Very unRESTful indeed!

REST and shaping data

Since I'm invested in REST, I've been wondering about whether we can bring some of these ideas to REST APIs. Perhaps we can even map GraphQL to a REST API in a reasonably efficient way. But even if we don't implement all of GraphQL, we might gain enough to make our REST APIs more useful to front-end developers.

As an exercise, let's try to express the query above as a REST query for a hypothetical REST API. First we take this bit:

user(id: 4000) {

We can express this using a path:

/users/4000

The client could construct this path by using a URI template (/users/{id}) provided by the server, or by following a link provided by the server, or by doing the least RESTful thing of them all: hardcoding the URL construction in client code.
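
The URI template route could be as simple as this sketch, using the third-party uritemplate library (an assumption; any RFC 6570 implementation would do):

from uritemplate import URITemplate

# The server advertises '/users/{id}'; the client expands it.
template = URITemplate('/users/{id}')
print(template.expand(id='4000'))  # /users/4000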

How do we express with HTTP what fields the client wants? REST of course does have a mechanism that can be used to shape data: HTTP query parameters. So this bit:

id,
name,
isViewerFriend,

could become these query parameters:

?field=id&field=name&field=isViewerFriend

And the query would then look like this:

/users/4000?field=id&field=name&field=isViewerFriend

That is pretty straightforward. It needs server buy-in, but it wouldn't be very difficult to implement in the basic case; a sketch of such an implementation follows below. The sub-query is more tricky. We need to think of some way to represent it in query parameters. We could do something like this (multi-line for clarity):

?field=profilePicture&
  filter:profilePicture.size=50&
  field=profilePicture.uri&
  field=profilePicture.width&
  field=profilePicture.height

The whole query now looks like this:

/users/4000?
  field=id&
  field=name&
  field=isViewerFriend&
  field=profilePicture&
  filter:profilePicture.size=50&
  field=profilePicture.uri&
  field=profilePicture.width&
  field=profilePicture.height

The result of this query would look the same as in the GraphQL example. It's important to notice that this REST API is not fully normalized -- the profilePicture data is not behind a separate URL that the client then needs to go to. Instead, the object is embedded in the result for the sake of convenience and performance.
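
To make the server buy-in concrete, here is a sketch of a view that honors the field parameters in the basic case. The User model, the App class and the attribute names are invented for illustration, and sub-queries are not handled:

@App.json(model=User)
def user_default(self, request):
    data = {
        'id': self.id,
        'name': self.name,
        'isViewerFriend': self.is_viewer_friend,
    }
    # request.GET is a WebOb multi-dict; getall() collects every
    # 'field' parameter in the query string.
    fields = request.GET.getall('field')
    if not fields:
        return data  # no shaping requested, return everything
    return {key: value for key, value in data.items() if key in fields}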

I'd be tempted to make the server send back some JSON-LD to help here: each object (the user object and the profilePicture sub-object) can have an @id for its canonical URL and a @type as a type designator. A client-side cache could exploit this @id information to store information about the objects it already knows about. Client-side code could also use this information to deal with normalized APIs transparently: it can automatically fetch the sub-object for you if it's not embedded, at the cost of performance.
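
Such a response might look like this -- a sketch in which the @id values are invented:

{
  "@id": "/users/4000",
  "@type": "user",
  "id": 4000,
  "name": "Some name",
  "isViewerFriend": true,
  "profilePicture": {
    "@id": "/users/4000/profile_picture",
    "@type": "profilePicture",
    "uri": "http://example.com/pic.jpg",
    "width": 50,
    "height": 50
  }
}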

Does this REST API have the same features as the GraphQL example? I have no idea. It probably depends especially on how queries of related objects work. GraphQL does supply caching benefits that you wouldn't get here without more work. On the other hand, you might be able to exploit HTTP-level caching mechanisms with the REST-based approach. Then again, this approach has more HTTP overhead, which GraphQL can avoid.

Let's briefly get back to the idea of automatically mapping GraphQL to a REST API. What is needed is a way to look up a URI template for a GraphQL type. So, for the user type we could connect it to the URI template /users/{id}. The application could supply this map from GraphQL type to URI template to the server, so the server can translate GraphQL queries into requests against the REST API.
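
In code, that map could be no more than a dictionary. A sketch with invented entries, again using the uritemplate library:

from uritemplate import URITemplate

# Map from GraphQL type to the URI template of its REST resource.
TYPE_TO_TEMPLATE = {
    'user': URITemplate('/users/{id}'),
}

def rest_url(graphql_type, arguments):
    # rest_url('user', {'id': 4000}) -> '/users/4000'
    template = TYPE_TO_TEMPLATE[graphql_type]
    return template.expand(**{k: str(v) for k, v in arguments.items()})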

Further speculation

What about queries for multiple objects? We could use some kind of collection URL with a filter:

/users?filter:startsWith=a

It is normal in REST to shape collection data this way already, after all. Unfortunately I have no clear idea what a query for a collection of objects looks like in GraphQL.

I've only vaguely started thinking about mutations. If you can access an object's URL in a standard way, such as with an @id field, you can get a handle on the object and send it POST, PUT and DELETE requests.
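
With the requests library, that could look like this sketch (the URLs and payload are invented):

import requests

# Fetch the object, then use its @id to address it directly.
user = requests.get('http://example.com/users/4000').json()
picture_url = user['profilePicture']['@id']

# Overwrite the sub-object at its canonical URL.
requests.put(picture_url, json={'uri': 'http://example.com/new.jpg'})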

Conclusion

All this is wild speculation, as we don't really know enough about GraphQL yet to fully understand its capabilities. It's quite possible that I'm throwing away some important property of GraphQL by mapping it to a REST API. Scalability, for instance. Then again, my scalability use cases usually aren't the same as Facebook's, so I might not care as long as I get Relay's benefits for client-side code development.

It's also possible that it's actually easier to implement a single GraphQL-based endpoint than to write or adapt a REST API to support these patterns. Who knows.

Another question I don't have the answer to is what properties a system should have to make Relay work at all. Does it need to be exactly GraphQL, or would a subset of its features work? Which properties of GraphQL are essential?

Thank you, GraphQL and Relay people, for challenging the status quo! Though it makes me slightly uncomfortable, I greatly appreciate it. Hopefully my feedback wasn't too dumb; luckily you can't blame me too much for that as I can legitimately claim ignorance! I'm looking forward to learning more.

Server Templating in Morepath 0.10

Introduction

I just released Morepath 0.10 (CHANGES)! Morepath is a modern Python web framework that combines power with simplicity of use. Morepath 0.10's biggest new feature is server-side templating support.

Most Python web frameworks were born at a time when server-side templating was the only way to get HTML content into a web browser. Templates in the browser did not yet exist. Server templating was a necessity for a server web framework, built-in from day 1.

The web has changed and much more can be done in the browser now: if you want a web page, you can accomplish it with client-side JavaScript code, helped by templates, or embedded HTML-like snippets in JavaScript, like what the React framework does. Morepath is a web framework that was born in this new era.

Morepath could take a more leisurely approach to server templating. We recommend that users rely on client-side technology to construct a UI -- something that Morepath is very good at supporting. For many web applications, this approach is fine and leads to more responsive user interfaces. It also has the benefit that it supports a strong separation between user interface and underlying data. And you could still use server template engines with Morepath, but with no help from the framework.

But there is still room for server templates. Server-generated HTML has its advantages. It's the easiest way to create a bookmarkable traditional web site -- no client-side routing needed. For more dynamic web applications it can also sometimes make sense to send a server-rendered HTML page to the client as a starting point, and only switch to dynamic client-side code later. This is useful when you want the end-user to see a web page as quickly as possible: sending HTML directly from the server can be faster, as there is no need for the browser to load and process JavaScript before it can display some content.

So Morepath has now, at last, gained server template support, in version 0.10. We took our time. We prototyped a bit first. We worked out the details of the rest of the framework. As we will see, it's nice that we had the chance to spend time on other aspects of Morepath first, as that infrastructure now makes template language integration very clean.

The basics

Say you want to use Jinja2, the template language used by Flask, in Morepath. Morepath does not ship with Jinja2 or any other template language by default. Instead you can install it as a plugin in your own project. The first thing you do is modify your project's setup.py and add more.jinja2 to install_requires:

install_requires=[
  'more.jinja2',
],

Now when you install your project's dependencies, it pulls in more.jinja2, which also pulls in the Jinja2 template engine itself.

Morepath's extension system works through subclassing. If you want Jinja2 support in your Morepath application, you need to subclass your Morepath app from the Jinja2App:

from more.jinja2 import Jinja2App

class App(Jinja2App):
    pass

The App class is now aware of Jinja2 templates.

Next you need to tell your app what directory to look in for templates:

@App.template_directory()
def get_template_directory():
    return 'templates'

This tells your app to look in the templates directory next to the Python module you wrote this code in, so the templates subdirectory of the Python package that contains your code.
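
Concretely, the layout could look like this (a sketch; the names are invented):

myapp/
    __init__.py
    app.py              # the module containing get_template_directory()
    templates/
        customer.jinja2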

Now you can use templates in your code. Here's a HTML view with a template:

@App.html(model=Customer, template='customer.jinja2')
def customer_default(self, request):
    return {
      'name': self.name,
      'street': self.street,
      'city': self.city,
      'zip': self.zip_code
    }

The view returns a dictionary. This dictionary contains the variables that should go into the customer.jinja2 template, which should be in the templates directory. Note that you have to use the .jinja2 extension, as Morepath recognizes how to interpret a template by its extension.

You can now write the customer.jinja2 template that uses this information:

<html>
<body>
  <p>Customer {{name}} lives on {{street}} in {{city}}.</p>
  <p>The zip code is {{zip}}.</p>
</body>
</html>

You can use the usual Jinja2 constructs here.

When you access the view above, the template gets rendered.

Chameleon

What if you want to use Chameleon (ZPT) templates instead of Jinja2 templates? We've provided more.chameleon that has this integration. Include it in install_requires in setup.py, and then do this to integrate it into your app:

from more.chameleon import ChameleonApp

class App(ChameleonApp):
    pass

You can now set up a template directory and put in .pt files, which you can then refer to from the template argument to views.

You could even subclass both ChameleonApp and Jinja2App apps and have an application that uses both Chameleon and Jinja2 templates. While that doesn't seem like a great idea, Morepath does allow multiple applications to be composed into a larger application, so it is nice that it is possible to combine an application that uses Jinja2 with another one that uses Chameleon.

Overrides

Imagine there is an application developed by a third party that has a whole bunch of templates in it. Now, without changing anything in that application's directory, you want to override one of its templates. Perhaps you want to override a master template that sets up a common look and feel, for instance.

In Morepath, template overrides can be done by subclassing the application (just like you can override anything else):

class SubApp(App):
    pass

@SubApp.template_directory()
def get_template_directory_override():
    return 'override_templates'

That template_directory directive tells SubApp to look for templates in override_templates first before it checks the templates directory that was set up by App.

If we want to override master.jinja2, all we have to do is copy it from templates into override_templates and change it to suit our purposes. Since a template with that name is found in override_templates first, it is found instead of the one in templates. The original App remains unaffected.

Implementation

In the introduction we mentioned that the template language integration code in Morepath is clean. You should be able to integrate other template engines easily too. Here is the code that integrates Jinja2 into Morepath:

import os
import morepath
import jinja2


class Jinja2App(morepath.App):
    pass


@Jinja2App.setting_section(section='jinja2')
def get_setting_section():
    return {
        'auto_reload': False,
    }


@Jinja2App.template_loader(extension='.jinja2')
def get_jinja2_loader(template_directories, settings):
    config = settings.jinja2.__dict__.copy()

    # we always want to use autoescape as this is about
    # HTML templating
    config.update({
        'autoescape': True,
        'extensions': ['jinja2.ext.autoescape']
    })

    return jinja2.Environment(
      loader=jinja2.FileSystemLoader(template_directories),
      **config)


@Jinja2App.template_render(extension='.jinja2')
def get_jinja2_render(loader, name, original_render):
    template = loader.get_template(name)

    def render(content, request):
        variables = {'request': request}
        variables.update(content)
        return original_render(template.render(**variables), request)
    return render

The template_loader directive sets up an object that knows how to load templates from a given list of template directories. In the case of Jinja2 that is the Jinja2 environment object.

The template_render directive then tells Morepath how to render individual templates: get the template from the loader first, and then construct a function that, given the content returned by the view function and the request, uses the template to render it.
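
To see how little of this is Jinja2-specific, here is a toy integration of Python's built-in string.Template using the same two directives. This is a sketch for illustration, not a real package; the .st extension and all names are invented:

import os
import string

import morepath


class StringTemplateApp(morepath.App):
    pass


class StringTemplateLoader(object):
    def __init__(self, template_directories):
        self.directories = template_directories

    def get_template(self, name):
        # Check each template directory in turn, so that directories
        # registered by subclasses can override templates.
        for directory in self.directories:
            path = os.path.join(directory, name)
            if os.path.exists(path):
                with open(path) as f:
                    return string.Template(f.read())
        raise IOError("Template not found: %s" % name)


@StringTemplateApp.template_loader(extension='.st')
def get_string_template_loader(template_directories, settings):
    return StringTemplateLoader(template_directories)


@StringTemplateApp.template_render(extension='.st')
def get_string_template_render(loader, name, original_render):
    template = loader.get_template(name)

    def render(content, request):
        # content is the dictionary returned by the view function.
        return original_render(template.substitute(content), request)

    return render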

Documentation

For more documentation, see the Morepath documentation on templates.

Let us know what you think!