Category: Application

OSMaxx: easy access to OSM data

OpenStreetMap is much more than a free map of the world. It’s a huge geo-database, which is still growing and improving in quality. OpenStreetMap is a great project in many respects!
But because it is a community project, where basically everyone can contribute, it has some particularities which are rather uncommon in authoritative data sets. There, data are generated according to a predefined data standard. Thus, (in an ideal world) the data are consistent in terms of attribute structure and values. In contrast, attribute data in OpenStreetMap can exhibit a certain degree of (semantic) heterogeneity, misclassifications and errors. The OSM wiki helps a lot, but it is not binding.
Another particularity of OpenStreetMap is the data model. Coming from a GIS background, I was taught to represent spatial networks as a (planar) graph with edges and nodes. In the case of transportation networks, junctions are commonly represented by nodes and the segments between them as edges. OpenStreetMap is not designed this way. Without going into details, the effect of OSM’s data model is that nodes are not necessarily introduced at junctions. This doesn’t matter for mapping, but it does matter for network analysis, such as routing!
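To make this concrete: before OSM ways can be analyzed as a graph, they have to be split at shared nodes. Here is a minimal sketch of that pre-processing step, assuming a simplified, hypothetical input structure (a dict of way id to ordered node ids, not the raw OSM format):

```python
from collections import Counter

def split_ways_at_junctions(ways):
    """Split OSM-style ways into routable edges at shared (junction) nodes."""
    # Count how often each node is referenced by any way.
    node_usage = Counter(node for nodes in ways.values() for node in nodes)

    edges = []
    for nodes in ways.values():
        segment = [nodes[0]]
        for node in nodes[1:]:
            segment.append(node)
            # A node shared by several ways marks a junction: cut the way here.
            if node_usage[node] > 1 and node != nodes[-1]:
                edges.append(segment)
                segment = [node]
        edges.append(segment)
    return edges

# Two ways crossing at node 3 become four routable edges.
ways = {1: [1, 2, 3, 4, 5], 2: [6, 3, 7]}
print(split_ways_at_junctions(ways))  # [[1, 2, 3], [3, 4, 5], [6, 3], [3, 7]]
```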

In 2014 I presented and published an approach that deals with attributive heterogeneity in OSM data. Later I joined forces with Stefan Keller from the University of Applied Sciences in Rapperswil, Switzerland, and presented our work at the AAG annual meeting 2015 in Chicago.
Since then Stefan and his team have lifted our initial ideas of harmonized attribute data to an entirely different level. They formalized data cleaning routines, introduced subordinate attribute categories and developed an OSM export service which generates real network graphs from OSM data. The result is just brilliant!


Two maps with very different scale made from the same data set.

The service can be accessed via osmaxx.hsr.ch, where a login with an OSM account is required. Users can then choose whether to go with an existing excerpt or define an individual area of interest. In the latter case the area can be clipped on a map, and the export format (from Shapefiles to GeoPackage to SQLite DB) and spatial reference system can be chosen. The excerpt is then processed and published on a download server. At this stage I came across the only shortcoming of the service: you don’t get any information that processing the excerpt can take up to several hours (see here).
However, the rest of the service is just perfect. After “Hollywood has called” the processed data set can be downloaded from a web server.

OSMaxx interface.


The downloaded *.zip file contains three folders: data, static and symbology. The first contains the data in the chosen format. In the static folder all licence files and metadata can be found. The latter is especially valuable, because it contains the entire OSMaxx schema documentation. This excellent piece of work, which is the “brain” of the service, is also available on GitHub. Those who are interested in data models and attribute structure should definitely have a look at this!
The symbology folder contains three QGIS map documents and a folder packed full of SVG map symbols. The QGIS map documents are optimized for three different scale levels and can be used for the visualization of the data. I’ve tried them with a rather small dataset (500 MB ESRI File Geodatabase), but QGIS (2.16.3) always crashed. However, I think there is hardly any application context where the entire content of an OSM dataset needs to be visualized at once.

Of course, OSMaxx is not the first OSM export service. But besides the ease of use and the rich functionality (export format, coordinate system and level of detail), the attribute data cleaning and clustering are real assets. With this it is easy, for example, to map all shops in a town or all roads where motorized vehicles are banned. Using the native OSM data can make such a job quite cumbersome.
I have also tried to use the data as input for network analysis. Although the original OSM road data are transformed into a network dataset (ways are split into segments at junctions), the topology (connectivity) is invalid at several locations in the network. Before the data are used for routing etc., I would recommend a thorough data validation. For the detection of topological errors in a network see this post. Maybe a topology validation and correction routine can be implemented in a future version of OSMaxx.
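One simple way to spot such connectivity problems, assuming the exported segments have been loaded as edges between shared node ids, is to look for disconnected components, e.g. with networkx:

```python
import networkx as nx

def report_connectivity(edges):
    """Report parts of a road network that are not connected to the main component.

    `edges` is a hypothetical list of (from_node_id, to_node_id) tuples,
    e.g. derived from the start and end nodes of the exported segments.
    """
    graph = nx.Graph()
    graph.add_edges_from(edges)

    components = sorted(nx.connected_components(graph), key=len, reverse=True)
    print(f"{len(components)} connected component(s)")
    # Everything outside the largest component is suspicious: either a
    # topological error or a genuinely unreachable part of the network.
    for component in components[1:]:
        print("isolated part with nodes:", sorted(component))

report_connectivity([(1, 2), (2, 3), (4, 5)])  # nodes 4 and 5 are cut off
```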

In the current version the OSMaxx service is especially valuable for the design of maps that go beyond standard OSM renderings. But the pre-processed data are also suitable for all kinds of spatial analyses, as long as (network) topology doesn’t play a central role. Again, mapping and spatial analysis on the basis of OSM data was possible long before OSMaxx, but with this service it isn’t necessary to be an OSM expert, and thus I see a big potential (from mapping to teaching) for this “intelligent” export service.


Mysterious bicycle routing …

This is only a quick note on a recent observation I’ve made while using bicycle routing portals on the web. Nevertheless, it nicely illustrates the relevance of data quality and of the implemented model routines. And because I’ve been struggling with these issues for quite a while now and things don’t necessarily turn for the better, I’m curious about your ideas on the following examples.

Imagine an absolutely normal situation in your daily mobility routines. You are at location A and you need to go to location B. Because you are a good guy, you choose the bicycle as your preferred mode of transport. What do you do? Of course you consult a routing service on the web, either via your desktop browser or mobile app.
But which service do you trust, which recommendations are reliable and relevant to you? Give it a try.

  1. For many people the big elephant Google Maps is their first choice. Whether you like it or not, Google has made a big leap forward with their bicycle routing service.

    routing1_google_maps

  2. Because you love OpenStreetMap and the GIScience group at Heidelberg University did a great job, you try the bicycle version of OpenRouteService. What you get is what you already know from Google.

    routing2_openrouteservice

  3. If you consult another routing portal that is based on OSM data, you might get surprised. Naviki suggests the following route:

    routing3_openrouteservice

  4. So far we’ve tried a commercial service and two platforms that are fueled by crowd-sourced, open data. Let’s turn to authoritative data now. The federal routing service VAO primarily provides multi-modal routing, with a focus on public transport. The bicycle version gives you this recommendation:

    routing4_vao

  5. The bicycle routing portal for the city and federal state of Salzburg, Radlkarte.info, is designed for the specific needs of utilitarian bicyclists. The database is identical to that of the VAO service, but the result differs significantly.

    routing5_radlkarte

The intention of this blog post is not to assess the quality (validity, reliability, relevance) of the routing recommendations as such. What I want to point to is the fact that three different services, with different data sources in the back, produce exactly the same routing recommendation, whereas services that are built upon the same data produce significantly different suggestions. That’s really mysterious. And it tells me that the data and data quality are only one side of the coin. Obviously the parametrization of the routing engine and the implemented model routines have a huge impact on the result. By the way, for all five examples I’ve used the default settings.
Following the argument about the impact of parametrization and modelling, one can conclude that it is not so much about the data (they seem to be of adequate quality in all three cases), but about how well you know the users’ specific needs and preferences and turn this knowledge into appropriate models and services. Thus the logical next step is to offer users the possibility to influence the parametrization of the routing engine in order to get what they expect: routing recommendations that perfectly fit their preferences.
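A toy example illustrates the point: on the very same graph, two different edge-cost parametrizations can already yield different recommendations (sketch with networkx; all numbers are invented):

```python
import networkx as nx

# A tiny invented network: the direct link is short but uncomfortable to cycle,
# the detour is longer but scores better on a fictitious comfort attribute.
G = nx.Graph()
G.add_edge("A", "B", length=1000, comfort=0.2)  # short, low comfort
G.add_edge("A", "C", length=700, comfort=0.9)
G.add_edge("C", "B", length=700, comfort=0.9)   # detour, high comfort

# Parametrization 1: pure distance -> the direct link wins.
print(nx.shortest_path(G, "A", "B", weight="length"))

# Parametrization 2: distance penalized by lack of comfort -> the detour wins.
for u, v, data in G.edges(data=True):
    data["cost"] = data["length"] * (2.0 - data["comfort"])
print(nx.shortest_path(G, "A", "B", weight="cost"))
```

Same data, same algorithm, two different “best” routes.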
Do you know routing services on the web that allow for maximum personalization (not only pre-defined categories)? To what degree would users benefit from personalized routing? And finally, would bicyclists use it at all? Let me know what you think and share your ideas!

Quality of Life

Searching for the term “Quality of Life” in Google Scholar returns nearly 2.5 million entries, and although the year is still young, 16,500 of these papers have been published in 2015! Obviously there is considerable interest in quality of life.
Currently several researchers of our department (mainly in Prof. Blaschke’s working group) are working on this topic, mostly in transdisciplinary project collaborations. Out of this, a mini-conference was organized this week as part of a PhD intensive week. I had the opportunity to give a kind of workbench report of our research group, presenting three recently developed analysis tools for planning purposes. Before I provide details and reflect on some of the other contributions, a fundamental question needs to be dealt with: “Quality of Life?? What is it all about?” Right, what is it actually about and why do geographers care about it?

What many people associate with “Quality of Life” …

First, many different disciplines use the term “Quality of Life” for their particular field of interest. Thus the first challenge is to get some orientation in the jungle of termini technici and buzzwords. I’ll try to briefly do this …
Being interested from a scientific perspective, I first search for journals. The journal with the most promising title, Quality of Life Research, turns out to deal exclusively with aspects of life quality in the context of medical treatment. The journal’s very first paper, published in 1992, investigates how the quality of life changes in cases of head and neck cancer.
Because this domain-specific approach is rather different from the common understanding in my field, I consult the disambiguation page in Wikipedia and find it quite handy. At least it helps me to learn that not only medical professionals but also movie makers seem to have some interest in the topic. Apart from these insights, a quote in a linked Wikipedia article by economist Robert Costanza catches my interest: quality of life has been an explicit policy goal. Wow – policy for the people and not for the governmental/administrative body. Sounds good. And it might bring me closer to the terminological understanding of my colleagues.

Second, “Quality of Life” is an anthropocentric approach. In Costanza’s journal article (from which the above-mentioned quote is drawn), QoL is tightly bound to human needs on the one hand and subjective well-being on the other. Based on this definition, one can assess the current situation, both on an individual and an aggregated level, and derive policy implications (or better, recommendations) from these findings. What is missing in Costanza’s transdisciplinary (psychology, medicine, economics, environmental science and sociology are mentioned) and rather generic (this is why the article helps to gain an overview) approach is the spatial dimension. Putting the QoL or well-being of individuals into focus necessarily requires considering their environment, which at least partly consists of physical entities (humans’ physical habitat, so to say). This naïve observation might bring me even closer to why geographers are discovering QoL as a beneficial research topic.

Third, “Quality of Life” is an (implicitly) spatial topic. Scholars such as Robert Marans applied the QoL concept to urban environments. In the work of Marans et al. the focus primarily lies on the perceptional and/or emotional attitudes people have towards their immediate environment. From the overlay of geographical space with meaning, the central term place is derived. This intersection is exactly where geographers can contribute their conceptual models and analytical tools in order to serve as a hinge for several disciplines: from environmental psychologists to traffic engineers to public administration. As should have become clear so far, “Quality of Life” covers lots of approaches, initiatives and domains. And even within the framework of place-based QoL research it is not always clear whether everyone speaks the same language … which brings me directly to a short reflection on the QoL mini-conference.

The event emerged from several preceding, joint initiatives by Prof. Blaschke (ZGIS) and Prof. Keul, an environmental psychologist. The latter is primarily interested in how certain measurable parameters, such as air quality, noise or urban greenness, are perceived and emotionally experienced. Equipped with profound statistical skills, he then tries to find correlations between these two categories. If there are causal relations (for example between noise in dB and the degree of annoyance), the respective parameters can serve as proxies for the average perception or, in a more general sense, “Quality of Life”. In such cases the study design is straightforward: the individuals’ subjective perceptions of objective (measurable) parameters are aggregated and related to each other, based on a large enough sample size.
What I found much harder to follow were presentations in which “objective” and “subjective” indicators got mixed up. The digital representation of the physical environment and structural descriptions of it were generally named objective indicators (building characteristics, distance to the nearest PT stop, number of facilities of daily need within a certain distance etc.). As additional indicators, the collective perception of geographical space (= place) was implemented in integrated assessment (or whatever) tools. I don’t know if I got something wrong or whether some of the presenters indeed used a fuzzy concept of objectivity and subjectivity. Referring to the figure below, I had concept (b) in mind in the context of objective and subjective QoL indicators so far. But – given I’ve understood them correctly – some colleagues build their work on concept (a), which is very different from (b).

Until some of the presentations I had concept (b) in mind in the context of objective and subjective QoL indicators. As far as I understood some of the other presentations, they were built on concept (a).

A third group – actually the majority of contributors, including myself – presented methods and tools that helped to either capture a city’s physical structure or simulate collective human behaviour. Others provided tools to model and assess urban environments, based on expert knowledge and/or collected input of (affected/involved) people.
In my presentation I focused on the contribution of bikeability to liveable cities. Following the layer concept sketched above, this could be just another layer to consider in an overall QoL assessment. We have recently developed three analysis approaches: one that helps to assess the immediate environment of facilities in terms of bicycle safety, one that assesses the accessibility of facilities, and a third one that helps to simulate potential changes in the road space and their respective effects on bikeability. Here are my slides:

 

Apart from the effort we plan to invest in the presented planning tools anyway, I could imagine contributing to further joint initiatives in the QoL context. But what would be desperately needed – at least within our local group – is a precise, common understanding of the “Quality of Life” concepts and contributing parameters.

Spatial modelling with OGD and OSM data

While Open Government Data (OGD) are currently a big deal in the German-speaking countries, the OpenStreetMap project celebrates its 10th anniversary. How these different data sources can be dealt with in spatial modelling approaches, and how they can even be used in combination, were the two major topics of a presentation I gave last Friday at a UNIGIS workshop in Salzburg.

Spatial modelling allows for interpreting and relating data for specific applications, without necessarily manipulating them. Neglecting this option and building applications directly on databases can result in rather weird and/or useless results. The reason for this is simple: generally, data are captured for a certain purpose. Naturally, this purpose determines the data model, the attribute structure and the data maintenance. And these determining factors might diverge from the requirements of the intended application.

Data are like screws (or any other basic element) which can be used for various machines/final products. But how screws (and all the other elements) are arranged is not an inherent characteristic. A plan (model) is needed in order to get the intended product.

In the case of OGD the published data are made available by different public agencies. For example, the responsible department is obliged by law to monitor air quality and, if necessary, to intervene efficiently. Thus different parameters are sensed for this very purpose. When these data are published as OGD, one can, for example, use them to build a “health map”. But in such a case the direct visualization of the micrograms and PPMs of the sensed pollutants wouldn’t make much sense. The data need to be interpreted, aggregated, classified, related – in short, modelled – in order to fit the intended purpose of the map.
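A sketch of what such a modelling step could look like in code; the class boundaries are purely illustrative assumptions, not official limit values:

```python
def classify_pm10(value_ug_m3):
    """Map a measured PM10 concentration to a map-ready class.

    The thresholds below are illustrative assumptions only,
    not official air-quality limit values.
    """
    if value_ug_m3 < 20:
        return "good"
    elif value_ug_m3 < 40:
        return "moderate"
    return "poor"

# Raw sensor readings (micrograms per cubic metre) become legend classes.
for station, value in {"S1": 12.4, "S2": 35.0, "S3": 61.7}.items():
    print(station, classify_pm10(value))
```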
A similar mechanism holds true for data from the OpenStreetMap project. Originally the data were mapped for the purpose of building a free world map. Meanwhile the extent of the database has grown enormously and the data can be used for much more sophisticated applications than a “simple” world map. But again, if the data – and especially the attributes – which were originally collected for a specific purpose are being used in any other context, they have to be processed and modelled.

When applications are built not only on one dataset which was originally created for a different purpose, but on several datasets (e.g. because the data availability ends at the border of an administrative unit), modelling becomes necessary anyway. As an example I referred to our current work in the context of the web application Radlkarte.
Here it was necessary to combine authoritative data (mainly published as OGD) with crowd-sourced data. Because of the fundamental differences between these data sources – concerning the data model, attribute structure, data quality and the competence for data management – evaluation and correction routines, as well as an extensive modelling workflow, had to be implemented. But, as demonstrated in the presentation, this effort pays off significantly when the validity and plausibility of the results are examined.
Geographic information systems (GIS) are intuitive and powerful environments for the implementation of such multi-stage workflows. They allow for data storage and management in spatial databases, provide modelling interfaces and facilitate immediate analysis and visualization.

Slope-sensitive bicycle routing

Bicyclists are sensitive to topography. They are either eager for challenging up- and downhill routes or they want to avoid steep slopes at all costs. Although some routing applications offer options such as “avoid steep hills/slopes/etc.”, it is still unclear how topography can reasonably be considered in routing recommendations – in fact, there are lots of influential variables to consider, from trip purpose to technical equipment.
Other, more conceptual and still open questions are, first, the degree to which topography contributes to route choices and, second, how to handle downhill sections in a recommendation model. In this post I’ll concentrate on these aspects.

There are some (not too many!) scientific publications and applications which deal, mostly among other topics, with topography’s influence on route choice or recommendation models. Here is a short (and very probably incomplete) overview:

  • Arndt Brenschede’s work on this subject, which he partly presented at this year’s FOSSGIS conference (link to presentation), mainly deals with the question of how to extract slope information from SRTM data and use it for further modelling. For his demo routing application he applies a hysteresis filter to the processing of the SRTM data and a kinematic model for routing recommendations. The results are of good quality for long-distance routes. For urban route planning the resolution of the data is too coarse.
  • In Menghini et al. (2010, PDF) a route choice model is set up based on a large data set of GPS tracks. Concerning the question of topography for routing recommendations, they conclude that the maximum gradient is far more relevant for route choices than the average gradient.
  • Sener et al. (2009, PDF) and Hood et al. (2011, PDF) both found that the bicyclist’s sex and the trip purpose strongly influence the preference for or avoidance of steep slopes. Women and commuters tend to avoid routes with high gradients, whereas men – especially on leisure trips – generally prefer hilly routes.
  • In a very interesting study by Parkin et al. (2008, PDF), a 3% gradient is defined as the threshold above which slopes are perceived as being (too) steep by bicyclists. The advantage of this study is that it focuses exclusively on commuters; thus the samples are more comparable than in most other studies. The downside is that the calculations are done for census blocks and are not network-based.
  • Similar findings are documented by Broach et al. (2012, PDF). They define a 2% gradient as the critical threshold. Additionally they found that bicyclists are willing to tolerate up to 70% longer distances in order to avoid uphill sections with a 2-4% gradient.
  • Troped et al. (2001, PDF) were able to measure the influence of a hilly topography on the frequency of bicycle usage. In this context they defined the term “steep hill barrier”, which stands for limited access to adequate bicycle infrastructure due to big height differences. Interestingly, surveyed users systematically tended to overestimate the slope gradient; they perceived slopes as steeper than they actually were.

Summing up this first literature review, we can conclude the following:

  1. The quality/resolution of the data is decisive for any further analysis.
  2. General assumptions for all bicyclists can hardly be made. Sex and trip purpose are two key factors for the tolerance of uphill sections.
  3. A 2-3% gradient can be regarded as a significant threshold.
  4. The bicycle’s share in the modal split and route choices are influenced by the terrain.
  5. In all studies and applications only uphill sections are considered. I’ve found nothing about downhill sections.
  6. The role of topography in this context is still subject to research (see Heinen et al. 2010, PDF).

In the context of a current project we were asked to develop a demo version of a slope-sensitive routing application for bicyclists. The focus of the project primarily lies on safety; thus the existing routing application provides not only the shortest route but also the safest (or most bicycle-friendly) one. This is how we’ve built a slope-sensitive function on top of it:

I. General considerations and prerequisites


“What goes up must come down.” (Isaac Newton)

If routing applications offer the option to avoid steep slopes, generally uphill sections are meant. Although we have known since Newton that “what goes up must come down”, it seems that nobody cares about downhill sections. At least from a safety point of view this is a big shortcoming!
For the recommendation of the most bicycle-friendly routes, the current routing engine minimizes the cumulative value of a safety index (for how this index can be calculated, refer to a previous post). In the slope-sensitive version this index should be manipulated according to a pre-processed, continuous function and potential user input.

II. Terrain data

In order to derive the gradient for each segment of a digital street network, a digital terrain model with 10 m resolution was overlaid with the line geometry. Thus a height value for each vertex of the line geometry could be extracted. For the calculation of the mean gradient, the height difference between the start and end point of each segment was calculated. Based on this, together with the segment’s length, we derived the gradient for both directions (note the sign!). Due to the comparably short length of the segments in our network (< 100 m), this generalization (see figure) could be tolerated.
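The gradient derivation itself boils down to a few lines; here is a sketch, assuming the start and end heights per segment have already been extracted from the DTM:

```python
def segment_gradients(length_m, h_start_m, h_end_m):
    """Return the mean gradient (in percent) for both travel directions.

    The sign encodes the direction: positive = uphill, negative = downhill.
    """
    gradient_forward = (h_end_m - h_start_m) / length_m * 100.0
    return gradient_forward, -gradient_forward

# An 80 m segment climbing 4 m: +5 % in digitized direction, -5 % against it.
print(segment_gradients(80.0, 420.0, 424.0))
```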

III. Building slope classes

Hardly anybody is able to estimate gradients accurately. Most often, ordinally scaled verbal descriptions are used (for example, “This road is far too steep.” or “This road has a gentle slope.”). In order to reflect this in our model and to provide easy-to-understand input parameters, we built the following slope classes (the classification is based on examples from the literature and expert input):

Verbal description | Gradient interval | Network coverage (cum.)
Level              | 0 – 1.5 %         | 72.3 %
Little slope       | > 1.5 – 3 %       | 84.1 %
Gentle slope       | > 3 – 6 %         | 91.9 %
Steep slope        | > 6 – 12 %        | 97 %
Very steep slope   | > 12 %            | 100 %


IV. Using a manipulated index value as impedance in routing

The idea now is to manipulate the existing index value according to the slope class. The function which is used to calculate the factor aims to reflect both safety concerns and preferences. For uphill rides this means that the steeper a slope is, the higher (worse) the index value becomes. For downhill sections the function results in better index values for low gradients and higher values for steep slopes. Thus the safety concerns about potentially fast downhill rides are successfully reflected. For segments in the first slope class (level) the index value remains as it is.

Factor to manipulate the index value according to the respective slope class.

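As a sketch of that logic in code: the class boundaries follow the table above, while the factors are invented placeholders, since the actual curve is only shown in the figure:

```python
# Upper class limits (absolute gradient in %) as in the table above; the
# factors are invented placeholders for the curve shown in the figure.
UPHILL_FACTORS   = [(1.5, 1.0), (3.0, 1.2), (6.0, 1.6), (12.0, 2.2), (float("inf"), 3.0)]
DOWNHILL_FACTORS = [(1.5, 1.0), (3.0, 0.9), (6.0, 1.1), (12.0, 1.8), (float("inf"), 2.5)]

def adjusted_index(safety_index, gradient_percent):
    """Scale a segment's safety index according to its slope class.

    Positive gradients are uphill, negative ones downhill, so the
    adjustment is direction-dependent.
    """
    table = UPHILL_FACTORS if gradient_percent >= 0 else DOWNHILL_FACTORS
    for upper_limit, factor in table:
        if abs(gradient_percent) <= upper_limit:
            return safety_index * factor

print(adjusted_index(1.0, 4.5))   # uphill, "gentle slope" class
print(adjusted_index(1.0, -4.5))  # downhill, same class, different factor
```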

If, in a later step, user input should be considered for the route calculation, the function can easily be adapted. In the current demo the manipulated index value is used as impedance in the routing engine, where it can easily be compared with the original index value.
In regions with low topographic variance, the manipulation of the index value has virtually no effect on the routing results. The same holds true for areas with very short slopes, such as underpasses or alleys in historic city centers. But as the figure below demonstrates, the slope sensitivity can influence a routing result significantly if the terrain tends to be hilly:


Different routing recommendations depending on slope sensitivity and direction (visualized in Google Earth).

Although this is only a demo and several parameters might need to be optimized, the first results indicate that the general approach is promising. What can be regarded as really innovative is the explicit consideration of the riding direction. This makes it possible to model uphill and downhill sections separately from each other. To my current knowledge there is no routing application for bicyclists on the web where this function is implemented.
What is missing in this demo are the different attitudes to slopes depending on the user and the trip purpose. Currently several options are offered, but in a further step the personalization of routing recommendations is definitely a hot topic!

I guess there are many bicycle enthusiasts out there who have valuable input – please share it. Your comments, experiences and ideas are important, as the body of scientific literature on this topic is very thin. Looking forward to reading from you!

 

P.S.: thanks to the guys from TraffiCon (Stefan, Gernot and Martin) for the stimulating discussions.

How to deal with attribute gaps in OSM data?

Since the start of the OpenStreetMap project, numerous studies have dealt with the “quality” of this crowdsourced data set. In a previous post I showed how relative the “quality” of a data set can be. Interestingly – just as a side note – this post got by far the most views.
Anyway, most studies dealing with the quality of OSM data focus on geometric characteristics. Haklay & Ellul (2010) investigate the completeness of OSM (compared to Ordnance Survey data) in the UK. Helbich et al. (2012) compare the spatial accuracy of OSM and TomTom data. And, just to name a third example, Jackson et al. (2013) analyze both completeness and accuracy for OSM data in Colorado.
Only very few studies deal with the attributive quality of OSM data. Ludwig et al. (2011) and Graser et al. (2013), for example, evaluate the attributive completeness of selected attributes. But to my current knowledge, there is little more …

I must confess that I’m not an OSM geek or a heavy mapper. But I’ve worked with the data in several projects and got to know (and love) them; and of course I’ve contributed to OSM more than once. In a recent project my task was to model across two data sets with different data models and attribute structures and to use the modelling results as inputs in a network analysis. One of these data sets was an OSM extract for 5 municipalities in the Austrian-Bavarian border region. During my work I learned to deal with at least three issues concerning the attributive quality of OSM data:

  • Attribute gaps
  • Inconsistencies and errors
  • Heterogeneous attribute structure

Some of my lessons learned will be presented at this year’s AGIT conference (this is an explicit invitation to all German-speaking readers to this nice conference!!!). Since the conference language will be German, I’ll publish an excerpt of my paper here. Today I want to focus on the first issue, attribute gaps, and how to deal with them in spatial analyses, such as routing.

The road network I’ve worked with has a total length of roughly 1,000 km with slightly more than 5,000 ways in OSM (for analysis purposes I processed the data, which is irrelevant for the following considerations). Compared to other data sets (commercial and authoritative) of this region, OSM can be seen as the most up-to-date and the most complete in terms of existing ways. But when it comes to attributes (tags), the OSM data set has several downsides.
Here is an overview of the completeness of several tags which were important for my modelling:


Completeness of selected attributes in OpenStreetMap. The road network (~ 1,000 km) is in light gray, ways with values for the respective keys are in dark grey.
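For reference, such a length-weighted completeness figure can be computed in a few lines; `ways` below is a hypothetical list of records with a segment length and a tag dictionary:

```python
def tag_completeness(ways, key):
    """Length-weighted share (%) of the network that carries a value for `key`."""
    total_length = sum(way["length_m"] for way in ways)
    tagged_length = sum(way["length_m"] for way in ways if key in way["tags"])
    return tagged_length / total_length * 100.0

ways = [
    {"length_m": 100.0, "tags": {"highway": "residential", "maxspeed": "50"}},
    {"length_m": 300.0, "tags": {"highway": "track"}},
]
print(f"maxspeed coverage: {tag_completeness(ways, 'maxspeed'):.1f} %")  # 25.0 %
```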

The question for any modeling and/or analysis that builds on such data is how to deal with the attribute gaps.
Take for example the key “maxspeed”. This attribute is necessary for the calculation of the mean driving time for every way. If it’s not there, you can still calculate a route, for example, but not the total travel time. No maxspeed, no travel time? Not necessarily!

The OSM data set offers a whole bunch of different attributive information, and many of these attributes are functionally related: the road category determines the maximum speed to a certain degree, and so on. To illustrate such a functional relation, imagine the following: if a way is tagged with highway = residential, the probability is very high that the maximum speed is not higher than 50 km/h due to traffic regulations.
Such functional relations generally allow for an estimation of missing attribute values. And for several analyses, estimated values are sufficient. For example, if you calculate the total travel time for a route, it’s an estimation anyway.

So, how to estimate missing values for a whole data set? Here is an extract of the Python script we used to estimate the maximum speed:

Python script for the estimation of missing maxspeed values based on functionally related attributes.

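The script itself was only shown as a screenshot; a minimal sketch of the underlying idea, with default values per highway class that are assumptions for illustration rather than the values actually used in the project:

```python
# Hypothetical default maxspeed values (km/h) per highway class; the values
# actually used in the project may differ. This only illustrates the idea.
DEFAULT_MAXSPEED = {
    "motorway": 130,
    "primary": 100,
    "secondary": 100,
    "tertiary": 80,
    "residential": 50,
    "living_street": 20,
}

def estimate_maxspeed(tags):
    """Return the tagged maxspeed if present, otherwise an estimate derived
    from the functionally related highway tag (999 = could not be resolved)."""
    if str(tags.get("maxspeed", "")).isdigit():
        return int(tags["maxspeed"])
    return DEFAULT_MAXSPEED.get(tags.get("highway"), 999)

print(estimate_maxspeed({"highway": "primary", "maxspeed": "70"}))  # tagged: 70
print(estimate_maxspeed({"highway": "residential"}))                # estimated: 50
print(estimate_maxspeed({"highway": "path"}))                       # unresolved: 999
```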

In analogy to this approach, nearly all attribute gaps (width, surface, tracktype etc.) can be closed, as long as enough functionally related attributes are in the database. Of course such an approach generates some errors, and in the worst case gaps in the attributes might remain (here expressed by the value 999). But in general, quite plausible results can be produced this way.

 

Do you know where it’s safe to cycle?

Several international, national and even local initiatives aim to reduce the number of road accidents. This ambition is prominently supported by the United Nations, which proclaimed the “UN Decade of Action for Road Safety 2011-2020”. In its annual report on road safety – dedicated to supporting the UN Decade of Action – the World Health Organization (WHO) states:

 "Policies to encourage walking and cycling need additional criteria to ensure the safety of these road users. [...] Promoting city cycling to reduce congestion cannot be encouraged if cyclists repeatedly find that their lanes cut across oncoming traffic." (p.30)

In order to follow this recommendation, all bicycle promotion strategies need to ensure high-performance – but above all – safe infrastructure and user-tailored information about safe bicycle connections. For both, infrastructure and information, a sound data basis is needed. Geographic information systems (GIS) can serve as powerful platforms for consolidating and compiling digital data about the road network and the whole road space. This data basis can then be used for advanced modelling and analysis purposes.
The starting point for any initiative dedicated to improving bicycle safety is to assess the road network’s quality in terms of potential risks for bicyclists. Based on this status-quo analysis, the existing infrastructure can be improved where it’s most needed and bicyclists can be informed about safe(r) routes.

The indicator-based assessment model is different to generally applied assessment approaches.


There are at least three common approaches for such a quality assessment (expert evaluation, analysis of accident locations, user feedback), but each of them has several drawbacks – I’ll come back to this in another post. In contrast to these approaches, we have developed an assessment model that makes use of geospatial modelling power. Conceptually this model is quite simple, as can be seen in the figure on the left. The basic idea is to identify those “indicators” which contribute to the potential safety risk for bicyclists, such as the presence and design of bicycle infrastructure or the motorized traffic load. These parameters are then weighted and compiled in a GIS model.

For the identification of the indicators, empirical studies are reviewed, experts and users are interviewed and accident reports are systematically analyzed. These sources also serve as proxies for the impact of every single indicator on the overall risk, expressed as a weight in the model. Depending on the environment (urban, rural), data availability or user preferences, these weights can easily be adjusted. Finally, all indicators with their respective weights are compiled in the indicator-based assessment model, which can be applied to any road network. It calculates a dimensionless index value which expresses the suitability of every single road segment for bicyclists: low values indicate a low safety risk and vice versa. Due to the linear design of the model, indicators can be added or removed without affecting the model’s performance. Generally it can be said that the more (non-redundant) indicators are used, the better the explanatory power of the model.

Conceptual design of the indicator-based assessment model. The list of indicators serves as illustration and is not complete.

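In its simplest form this compilation is a weighted linear combination; here is a sketch with invented indicator names, values and weights (the real model uses a larger, empirically derived set):

```python
# Invented weights; indicator values are assumed to be normalized to 0..1,
# where 0 is the best (lowest risk) and 1 the worst expression.
WEIGHTS = {
    "bicycle_infrastructure": 0.40,
    "motorized_traffic_load": 0.35,
    "speed_limit": 0.25,
}

def safety_index(indicator_values, weights=WEIGHTS):
    """Dimensionless index per road segment: low values = low safety risk."""
    return sum(weights[name] * value for name, value in indicator_values.items())

segment = {"bicycle_infrastructure": 0.2, "motorized_traffic_load": 0.8, "speed_limit": 0.5}
print(round(safety_index(segment), 3))  # 0.4*0.2 + 0.35*0.8 + 0.25*0.5 = 0.485
```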

The indicator-based assessment model can be applied in any GIS for the calculation of the index value on a road segment level. The computed result is then evaluated by experts and users. If necessary, the model can be iteratively adapted either on the level of the indicators or the weights.

Iterative workflow of model development, application, evaluation and calibration.


Compared to alternative assessment routines, the advantages of the GIS-based modelling approach can be briefly summarized as follows:

  • Transparency: the results of the assessment procedure can be traced back to the building blocks of the model; all parameters and weights are accessible – there’s no black box or subjective component (as e.g. in expert evaluations).
  • Comparability: the model is the same for the whole road network; thus the results, even on a segment level, can be easily compared.
  • Adaptability: due to the linear model design and the implementation of weights, the model can be adapted to any environment, data availability or user preferences. It is transferable and geographically scalable.
  • Reproducibility: Once the model is compiled it can be integrated in automatic assessment workflows. This allows for short update intervals and employment in simulation routines.

If you wonder what this modelling approach can be used for “in the real world of bicyclists”, have a look at a really nice web application: www.radlkarte.eu. This routing platform is actually based on the described model. It’s quite innovative for at least two reasons. First, it is exclusively designed for bicyclists and has never been a car navigation system … The calculation of safe routes (for legal reasons they are called “empfohlene Route” = recommended route) is a big deal, especially for kids, elderly people or families. Second, it successfully demonstrates the applicability of a rather sophisticated GIS workflow: different data sets with different data models are combined (OpenStreetMap and authoritative data from the city administration), the routing works across a national border (Austria and Germany), and the architecture of the system allows for further adaptations (it is e.g. planned to implement personalized routing information).

TL;DR

For the risk assessment of a whole road network, specifically for bicyclists, GIS facilitates pretty innovative solutions!