groat.nz

Digital Transformation, Sustainable Teams

Science Coding Conference 2017 Roundup - Day 1

On 1st & 2nd August 2017, over 50 researchers and software developers from Crown Research Institutes, universities and government agencies gathered in Wellington for the second CRI Coding Conference - now renamed the NZ Science Coding Conference.

I’m Andrew Watkins. As General Manager of IT for NIWA, and previously its systems development manager, it was my privilege to work with NeSI last year to create the first of these conferences. I felt there was a real need for a forum where the full range of software development activities that take place within science organisations could be represented, and where we could promote high-quality software engineering practices and tools to researchers and scientists who code.

This blog post is based on my closing speech at the conference and is a quick roundup of some of what we saw and learned.

Thanks to the organisers: Aleksandra, Rhiannon and Georgina from NeSI, without whose planning and encouragement the event would not have happened. Thanks also to Matthew Laurenson, Craig Stanton and others on the programme committee for selecting the presentations - an interesting mix of how-to technology, strategy and problem solving.

Thanks to NeSI for underwriting and to Massey University in central Wellington for hosting.

Peter Ellis

The day 1 keynote speech came from Peter Ellis - currently at StatsNZ but until recently at MBIE.

Peter talked about the strategy and work involved in migrating the MBIE analytics teams from a point-and-click, MS Excel world to a code-based environment, and then how they went on to build new, higher-level point-and-click tools. The overarching goal was to make accurate evidence available to policy makers, ministers and journalists.

Using Plato’s cave as an illustration, Peter said: “We are using science to try to understand what the reality is. But there is still a lot of illusion present.”

Strategically Peter needed to do two things:

  • Define good practice for a high-quality analytics team
  • Build skills and capability

Some of the immediate challenges were:

  • A lack of version control leading to difficulties in reproducibility
  • A lack of access to information - for example, the ability to build appropriate indexes in source databases
  • Limited tools - e.g. spreadsheets.

There is a nominal pipeline for data analysis:

  • Source survey data and transactional databases from multiple sources
  • Processing and quality analysis - some of it domain specific
  • Leading to cleaned, weighted, concorded microdata
  • Followed by aggregation
  • And finally reporting and visualisation

However, each of these stages would be manual and difficult to repeat accurately. For example, when creating a complex analysis model in Excel they would need to pull together sector data and present it in a way that makes sense to the public. It could take 18 months to gather the data, clean it up and generate the reports. The final model, consisting of up to 30 linked spreadsheets managed by multiple different groups, could not be easily updated and lacked documentation.

What was needed was a series of coding steps that aggregated the data and performed the analysis in a repeatable, revision-controlled way, so that updated reports could be generated whenever needed.
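To make that concrete, here is a minimal sketch of what such a scripted pipeline can look like - my illustration, not Peter's actual code, and the file names, columns and steps are all hypothetical:

    import pandas as pd

    def load_sources():
        # Pull the raw survey and transactional extracts (paths are placeholders)
        survey = pd.read_csv("raw/survey_2017.csv")
        transactions = pd.read_csv("raw/transactions_2017.csv")
        return survey, transactions

    def clean_and_weight(survey, transactions):
        # Domain-specific cleaning and weighting would live here
        merged = survey.merge(transactions, on="respondent_id", how="left")
        return merged.dropna(subset=["region"])

    def aggregate(microdata):
        # Aggregate the cleaned microdata up to reporting level
        return microdata.groupby("region")["spend"].sum().reset_index()

    if __name__ == "__main__":
        survey, transactions = load_sources()
        microdata = clean_and_weight(survey, transactions)
        aggregate(microdata).to_csv("output/regional_spend.csv", index=False)

Because each stage is plain code, the whole run can sit in version control and be re-executed whenever the source data changes.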

Drew Conway's data science Venn diagram

Peter identified some sector trends - these were confirmed by many of the speakers.

  • Everything in code
  • Reproducible - code, data and environments.
  • Adaptive, using best-of-breed tools rather than big end-to-end enterprise IT solutions
  • Focus on software engineering processes - referencing Joel's 12-point test for software teams
    • Version controlled
    • Modular
    • Peer review / pair programming
    • Continuous integration
    • Test-led development & automated tests
    • Integrated documentation process.

Given that this type of software engineering improvement pays off in the long run - with accuracy, reproducibility and maintenance gains - they also needed to identify some quick wins. Their first project was a tourism data improvement programme, which bypassed traditional governmental data sources and went directly to commercial credit card and bank data. Coupled with modern software techniques, this allowed them to quickly generate excellent interactive visual reports for policy makers.

Peter’s keys to success:

  • Build capability - both in data analysis teams and the wider researcher environment.
  • Generate workplace-specific development environments, providing access to the tools and data required.
  • Use in house staff rather than external trainers. They are cheaper and the experience of training others is a valuable skill in its own right.
  • Create an atmosphere of continuous learning, improvement and change.
  • Don’t wait to be perfect

Who's who

In the second session, representatives from the various organisations attending talked about their work and challenges.

Here are some of the common themes:

  • Professional software development teams are often called in very late in a project - when the researcher has completed data capture and analysis and finally realises that they need to build an online tool or visualisation, by which time most of the funding has run out.
  • There are often unreasonable expectations as to what operating an online system requires - such as the assumption that a website, once created, can be left unmaintained and will keep working for the lifetime of the Internet.
  • There was some discussion on the challenge of how to get in front of scientists and engage early. Ideas include:
    • Networking and teamwork conversations. Show up at the water cooler.
    • Proactive training for researchers. e.g. running software carpentry courses.
  • Problems valuing skills
    • Science staff and professional IT staff have different incentives and performance metrics. Scientists may be measured on publications, for example, and time spent developing good quality code or effective tools may be undervalued.
    • Universities often use students and postgrads on development projects. This creates built-in high turnover, making it difficult to maintain consistent development projects and code quality. There is an overhead of constantly retraining new staff, who must also learn about the project.
    • A general undervaluing of the benefits of software engineering.
    • For all the talk of advanced technology, a lot of benefit comes from moving up one step on the maturity model - e.g. starting to use version control.
    • Some organisations are data rich but information poor: they have a multitude of data sets that are hard to discover, lack clear documentation and data models, lack processes and the means to analyse them, and there is a lack of experienced data science capability.
  • Legacy code & silos
    • Some organisations have very long-running systems, e.g. Nationally Significant databases. These promote extreme conservatism and resistance to change. There is a requirement for long-term statistics - whether climate or job figures - to be produced consistently the same way each year.
    • Older systems have often evolved over time and have no as-built description or design documentation.
  • Labour costs massively outweigh compute equipment costs, yet the nature of IT budgets is that a researcher may have to work with a slow PC or wait weeks for access to online resources.
  • How easy is it for scientists to:
    • Use supporting infrastructure (e.g. SQL and NoSQL databases)
    • Get the right-sized resource
    • Get to and use existing building blocks
    • Use infrastructure in research that is virtually identical to production

Automation and standard tools were recurring themes - with lots more mentions of Docker than last year.

Talks and Presentations

Hilary Oliver - Cylc

Hilary Oliver from NIWA told us about workflow automation with Cylc, a major open-source tool that manages complex workflows with dependencies, spread over multiple servers, including cycling (periodic) activities. Cylc was developed to manage environmental forecasting models on NIWA’s HPC and is now used across the Unified Model consortium.

Written in Python, it uses a simple text-based notation for defining workflows, which supports revision control. A command line and API allow control of the workflow while it is in flight, letting users start and stop processes, update dependencies and track progress, errors and success. It can even cope with entire servers being restarted in the middle of a workflow.

See cylc.github.io

Boosting R with Parallel Fortran and C. Wolfgang Hayek, NeSI/NIWA

Wolfgang talked about how NeSI makes consultancy and support available to researchers for optimising code and helping systems make use of parallel processing capabilities.

The statistical programming language R is widely used in science across NZ. While it does support parallel programming through the parallel package, this works by spawning a new process for each worker. This approach takes time and extra compute resources, and is disliked by HPC admins due to the high process count that results.

An alternative approach is to make use of R’s ability to move code into supporting C or Fortran libraries. By placing the parallel code in these libraries, developers can make full use of tools such as OpenMPI to support multicore processors and OpenACC to support GPUs.

The basic steps are:

  • Identify compute-intensive parts of the code using a profiler, e.g. Rprof or callgrind.
  • Identify compute-intensive functions or loops with a large trip count.
  • Move these calculations into an external library.

In the example given, moving just a few core routines into parallel libraries resulted in order-of-magnitude speed-ups in model run times.
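The same offloading pattern applies beyond R. As a rough Python illustration (not Wolfgang's R/Fortran code): profile first to find the hot spot, then push the heavy loop into a compiled library:

    import cProfile
    import numpy as np

    def pairwise_distances_python(points):
        # Naive nested loop - the kind of hot spot a profiler will flag
        n = len(points)
        out = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                out[i][j] = sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
        return out

    def pairwise_distances_compiled(points):
        # Same calculation pushed into a compiled library (NumPy) - no Python-level loop
        pts = np.asarray(points)
        diff = pts[:, None, :] - pts[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))

    if __name__ == "__main__":
        pts = np.random.rand(300, 3).tolist()
        cProfile.run("pairwise_distances_python(pts)")    # shows where the time goes
        cProfile.run("pairwise_distances_compiled(pts)")  # the hot loop now runs in C

In R the equivalent last step is calling into the compiled routine through interfaces such as .C, .Fortran or .Call.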

NeSI consultancy services

An application of linear indexing to solve large combinatorial problems - Robbie Price, Landcare

Robbie introduced us to a fascinating corner of spatial analysis. Landcare works with many huge raster (gridded) data sets such as soil and land use maps. Typically these are manipulated within a GIS tool such as ArcGIS or QGIS.

One activity is combinatorial analysis - a sort of ‘SELECT DISTINCT’ for raster data. While this function is available in GIS tools, it may be limited to a certain number of layers, e.g. 20, while the data in question may have scores of layers.

Robbie worked in Python using the Anaconda suite and in particular GDAL and KEA.

Now, I got a bit fuzzy on the details here, but Robbie used a technique called linear indexing to effectively generate a constant-time look-up table for the combined raster data. This reduces the data volume with no loss of fidelity and allows the analyst to answer simple questions about the data set - such as how many cells of type X there are - very quickly.

The new algorithm runs 100 times faster than the GIS algorithm and is now only limited in capacity by the characteristics of the storage file format.
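For what it's worth, here is my rough reading of the idea, sketched in NumPy with toy data and made-up class counts (not Robbie's actual code): encode each cell's combination of layer values as a single linear index, then count the distinct indexes.

    import numpy as np

    # Three categorical raster layers over the same 1000 x 1000 grid (toy data)
    rng = np.random.default_rng(0)
    soil = rng.integers(0, 5, size=(1000, 1000))      # 5 soil classes
    landuse = rng.integers(0, 7, size=(1000, 1000))   # 7 land-use classes
    slope = rng.integers(0, 3, size=(1000, 1000))     # 3 slope classes

    # Collapse each cell's combination of classes into one linear index,
    # exactly as if the layers were axes of a 5 x 7 x 3 array.
    combined = np.ravel_multi_index((soil, landuse, slope), dims=(5, 7, 3))

    # 'SELECT DISTINCT' on the combined raster: unique combinations and their cell counts
    combos, counts = np.unique(combined, return_counts=True)
    print(len(combos), "distinct combinations")
    print(counts[:10])  # cell counts for the first few combinations

Once the combination of layers has been collapsed into one index, per-combination questions become a cheap array lookup rather than a scan across all the layers.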

Research development and production of weather forecasts at MetService - Andy Zeigler, Met Service

Andy kicked off by talking about the phases of crafting research software:

  • Experimental phase
    • exploratory and playful
  • Development phase
    • stabilising and repeatable
  • Operational phase
    • Scalable, updates never break
    • Accessible and it works continuously

There is a challenge for traditional IT groups operating in a research organisation: how best to be both an enabler for research and development and a supporter of sophisticated operational systems.

A big question is how easy is it for scientists to:

  • make use of existing supporting infrastructure (e.g. SQL and NoSQL databases),
  • get access to the right sized resources - compute, storage etc.,
  • get access to and make use of previously created building blocks - reuse,
  • use infrastructure in research that is close to production systems in order to allow rapid productionisation of experimental systems.

In particular, given that labour costs, that is, scientist time, largely outweigh equipment and compute costs, what element of core productivity are we aiming to optimise - capital or time?

Producing and maintaining good-quality on-premise systems is slow and difficult. Problems multiply when systems require different versions of libraries, environments and platforms, and use different processes and conventions for creation and deployment.

By using common platforms and environments the cost and performance of production solutions can be predicted during the development phase.

MetService approach:

  • Radical use of cloud platform as a service (PaaS) e.g. AWS.
  • Give access to as much compute as required - that is, to complete a model run, scale out on many servers for a short time rather than running a few big servers for a long time. Optimise time to completion.
  • Give access to lots of other managed services, e.g. databases, load balancing, domains etc.
  • Cost effective for research
  • Scalable for production
  • Low upfront cost
  • Resilient around the world

  • Use standardised language, package design and deployment patterns:

    • Subversion and GitHub
    • Peer review on merge to master
    • Python and conda
    • Julia and Docker
    • Jenkins CI/CD. Span from research to production.
  • CI - Operational deployment

    • Triggered from tagged release
    • If build, test etc ok then pushed to prod
    • Shut down Jenkins every night, restart every morning. Clean Setup.
    • Test complex software in parallel to production before switch.

    Don’t nurse your infrastructure; nurse your productivity and efficiency.
    Nurse simplicity and repeatability for scale.
    Infrastructure is a means to optimise outcomes.
    Select the right shoebox for your problem rather than making your problem fit the box.

  • Final thoughts

    • Automate, automate, automate. If you can’t, then don’t think about using cloud.
    • No one off experiments
    • Integrate roles Science, Development, Operations => SciDevOps
    • Infrastructure as code, software all the way down.

DIY IT using AWS to support chaotic development and concepts - Andre Geldenhuis, Victoria University

Andre occupies a unique role at Victoria - a member of IT whose job it is to spend time sitting next to researchers and working to provide them with the technology tools they need to be productive.

In the demo, Andre built a complete research sandbox in 20 minutes, using AWS servers and Ansible scripts to create a JupyterHub server complete with domain name and IP address.

Steps were:

  • Create a small AWS instance - an Ubuntu server
  • Use an Elastic IP to give it a fixed IP address
  • Use www.dot.tk to get a free domain name
  • Create an SSL certificate to allow script access
  • Install Ansible locally.

Then use the scripts at https://github.com/jupyterhub/jupyterhub-deploy-teaching
All the instructions are there.
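For a sense of what the first two steps automate, a minimal boto3 sketch might look like this (the region, AMI ID and key pair name are placeholders, and this is my illustration rather than part of Andre's demo):

    import boto3

    ec2 = boto3.client("ec2", region_name="ap-southeast-2")  # region is a placeholder

    # Launch a small Ubuntu instance (the AMI ID and key pair name are placeholders)
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t2.small",
        KeyName="my-keypair",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

    # Allocate an Elastic IP and attach it so the server keeps a fixed address
    alloc = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(InstanceId=instance_id, AllocationId=alloc["AllocationId"])
    print("JupyterHub host will be reachable at", alloc["PublicIp"])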

The created system includes OAuth user authentication so you can control who gets access to the services.

I recommend starting at the github site and working through the instructions.
Note that you will end up using some paid-for resources on AWS.

Modern Javascript Development, Scaffolding, Tools, Testing - Uwe Duesing, NIWA

Uwe demonstrated how modern languages and frameworks now typically come with tools that generate new projects according to best practices - set up ready to commit into git, deploy to servers, run automated tests and so on.

This is a major shift from the days when many novices would create new projects based on template code abstracted from how-tos and tutorials. Those projects were often configured for simplicity and getting started rather than for producing a production-ready system.

Uwe demonstrated Angular in conjunction with the node.js package angular-cli, a command-line interface for generating and managing Angular projects.

Although rather hampered by the VGA projector, Uwe was able to live-demo installation of the tool chain and generation of a first project. The project comes ready-configured with end-to-end and unit testing capabilities, so you can start writing fully tested applications from day one - by far the easiest way.

Again I recommend working through the readme to get a project started. Then perhaps watch this course on youtube for a crash course in what a javascript framework can do for front end development.

The day ended with drinks, nibbles and socialising.

Science Coding Conference 2017 Roundup - Day 2

Alys Brett - Research Software Engineers

Day 2 started with an experiment - a keynote address from someone on the other side of the world.
Alys Brett is from the Culham Centre for Fusion Energy in Abingdon, Oxfordshire, England. She is a software team leader and joint chair of the UK Research Software Engineer Association (RSEA).

Rather than risk the entire talk on the Zoom video conferencing system, Alys recorded a video of the main part of her talk, which we played to the audience, followed by a live Q&A through the conferencing system. This worked well - perhaps everyone should record their presentations.

At Culham, Alys has been collaborating on research code for the Joint European Torus (JET) fusion energy data pipeline. Its stages include:

  • Store raw data
  • Processing
  • Discovery
  • Access - via API to abstract away underlying storage
  • Analysis
  • Visualise

Research software (like most other software) has these goals:

  • Reusable
  • Readable
  • Reliable

Within a research organisation there can be structural and cultural barriers to good software engineering - for example, whose job is it to focus on quality when career structures and research metrics actively penalise researchers who spend time on code quality?

Introducing the Research Software Engineer (RSE): someone who combines software engineering skills with research skills.

They can be found in:

  • Research IT depts
  • HPC centres
  • National labs and large facilities
  • or can simply be the post-doc everyone turns to for help.

One difficulty can be recognising the role. For example, in a recent survey of 10,000 academic jobs advertised in the UK, about 400 were software related or had a software development component. That’s good - but they also found 194 different job titles!

A brief history:

  • 2010 - Software Sustainability Institute founded - 4 UK universities; better software, better research.
  • 2014 - UK RSE Association founded - a community to represent RSEs, with an elected committee.
  • 2015 - UK funding agency created dedicated RSE fellowships, designed to kickstart the establishment of RSE software groups; 7 were awarded. The association grew to 800 members, employed a part-time coordinator and created an RSE leaders forum to share knowledge and experience.
  • 2016 - First RSE conference, at which the current committee was elected. A new website (http://rse.ac.uk/) and Slack channels (http://rse.ac.uk/slack/) were created.

What do RSEs do?

  • Advising on tech
  • Architecture and co-design
  • Refactoring
  • Optimising and performance
  • Porting
  • Scaling, cloud HPC
  • Workflows and dev practices
  • Advising on project structure
  • Enabling open science - reproducibility

Skills

  • Communication
  • Programming
  • Bridge between groups.
  • Technical software
  • Patience

Problems

  • Only 11% of RSEs are female. In the UK most RSEs come from physics and maths backgrounds, which are male skewed. Care is needed in job descriptions to avoid implicit bias.
  • ‘Bus factor’ - how many developers would have to be hit by a bus before a project fails. In a survey they found that 46% of projects had a bus factor of 1, that is, a single person working on the project.

Future Growth:

  • There are now RSE orgs in Germany and the Netherlands
  • There is talk of a NZ and AU organisation and Aleksandra Pawlik at NeSI is working with the start-up group.

Q&A comments and observations:

  • With regard to low bus factors: many SE techniques are designed to limit the risk. Defence comes from pair programming, peer review, code repositories and automated tests, while standard build and deployment tools across projects help new developers pick up a project quickly.
  • Career paths for RSEs: are there groups trying to define a career structure? Where do you go after RSE - will there be a Research BA or Research Systems Architect?
  • We were interested in the reward patterns for these roles: is there a credit taxonomy or recognition metrics? Currently, to get recognition of a contribution to a research project one has to write a paper about the software and get cited; it would be better for software and data to become first-class research products.
  • RSEs tend to have a generalist mindset; there is commonly an ability to balance, not wanting to be hard-core academics or pure software engineers.

Finlay Thompson - Dragonfly Data Science

Finlay introduced us to Stencila (https://stenci.la/), a client application that works like a word processor but allows the inclusion of code and data to generate interactive, interrogatable documents. It is similar in some ways to Jupyter notebooks, but the focus is more towards a document with live code than an annotated program.

Stencila was created to make it easier for coders and clickers to collaborate, with a focus on supporting reproducible research.

It is based on an underlying text document format (markdown) with a visual JavaScript editor (think Atom) that contains embedded code and data blocks, which can be executed dynamically against a range of language backends, e.g. R and Python. The system uses Dockerfiles to generate the server back end.

Created by Nokome Bentley here in NZ, it has been funded by the Alfred P. Sloan Foundation and is now supported by developer teams around the world.

SE Essentials

  • Version control used properly, branches, pulls, code reviews, tags etc.
  • CI all the time, no builds/deploys without VCS
  • Tagged builds intrinsically reproducible at any date.
  • Enhances collaboration and report writing context.
  • Containerisation - Docker used systematically everywhere.

Example:
R packages - nzelect, nzcensus. On GitHub:
https://github.com/ellisp/nzelect

Dockerfile - software environment as code.
Stencila/alpha - a full dev environment.

The coder writes a Dockerfile containing all the code, data and libraries, containerises it, then writes the presentation documents.

Upstream libraries are installed in the Dockerfile. The author builds the Dockerfile on their computer and uploads it; the CI system then builds the Docker image.

I think that this type of application absolutely requires secure containers running at the back end; otherwise each editor instance becomes a great way to deliver remote code execution on your server. :)

First steps in creating 3D Vis using Python & VTK - Alexander Pletzer, NeSI

Code for this demo at https://github.com/pletzer/firstStepsInViz
There’s not much to say - go work through the demo and examples and have some fun plotting data with 3D models.

Creating the chaotic sandpit: giving corporate IT the right tools and freedom to support science - Jonathon (Johnny) Flutey, Victoria University, Wellington

Traditional core IT has key drivers of stability and security along with limited resources. This tends to result in systems that are locked down, not flexible, and not available when needed.

Victoria Uni created a Vision and Strategy for Digital Learning and Teaching, 2012 to 2017.

They measured themselves against a maturity model to identify weak points, covering:

  • Mobility
  • Support
  • Data
  • Policy
  • Processes
  • Tools
  • Collaboration
  • Computational

They developed an IT / Researcher engagement strategy:

  • Hire in disciplinary knowledge - digital humanities, research knowledge
  • Embed staff inside the research lifecycle and understand the process
  • React - on demand as soon as possible, delays stop research
  • Hire staff not tied to ITS architecture, services, support or policy - especially security.
  • PD: you can do whatever you want - trust the tech

This led to the recruitment of staff like Andre, who have the time and the role to sit and work with researchers to identify and deliver their technology tools and processes while bringing in best practices.

They identified three levels of platform sophistication and formality:

  • Chaotic Sandpit:

    • On demand, breakable, anything goes, no security or architecture
    • Unrestricted
    • Isolated from important systems
    • Not very supported - if something stops working the IT pager doesn’t have to go off
    • Researcher led - some technology understanding
    • AWS, Azure, Catalyst Cloud, Docker
    • Prototype, proof of concept, testing, hacking
    • Available in hours, short life cycle - shorter than OS patch cycles
  • Humming Hothouse

    • Evaluation, assessment, pilot, transitioning
    • collaboration between research and formal IT
    • AWS, Azure, Catalyst Cloud, Docker, private cloud
    • live testing, scaling up, persistent storage, privacy
    • available in days, months life cycle - project scale
    • Experimental
    • But may have privacy and security concerns so more care required
  • Disciplined Engineroom

    • formal delivery teams
    • ITIL
    • external support, licenced software
    • commercialisation
    • SLAs
    • Azure, VMWare, Physical SOE servers
    • security
    • built for production - weeks?
    • long term persistence

They may need to put containers into a long-term digital preservation system.

Introduction to the International Image Interoperability Framework (IIIF) - Bruno Kinoshita, NIWA

Anyone can get involved in open source projects - they are a good place to learn new code and techniques outside the constraints of work. Bruno got involved in the IIIF project after a pub conversation.

Website: http://iiif.io/
Github: https://github.com/iiif

What is IIIF?

  • A group of standards for sharing and reusing images
  • very high res images
  • Metadata enriched images
  • Supports Annotations and linked data
  • Search tools

Think Google Map tiles rather than PNG files.

Features:

  • Really large images with deep-zoom capability
  • Select, via the URL, a region of interest, rotation, quality etc. - see the snippet after this list
  • Cite, annotate and share subsets of an image
  • Supports user authentication for access control
  • Open standard - avoids vendor lock-in
  • Allows combining content across multiple repositories
  • On-the-fly thumbnails
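The "select by URL" part comes from the IIIF Image API, where region, size, rotation and quality are all path segments. A small illustrative Python snippet (the server and image identifier are made up):

    # Rough sketch of how an IIIF Image API URL is assembled
    # (the server and image identifier here are hypothetical).
    BASE = "https://example.org/iiif"
    IDENTIFIER = "my-chart-image"

    def iiif_url(region="full", size="max", rotation=0, quality="default", fmt="jpg"):
        # IIIF Image API pattern: {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
        return f"{BASE}/{IDENTIFIER}/{region}/{size}/{rotation}/{quality}.{fmt}"

    print(iiif_url())                                 # the whole image
    print(iiif_url(region="1000,2000,512,512"))       # a 512x512 crop at x=1000, y=2000
    print(iiif_url(size="!200,200", quality="gray"))  # an on-the-fly grey thumbnail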

Implementations:

Follow Bruno on twitter at @kinow

Applying good software engineering techniques to optimising HPC code - Chris Scott

In another example of the NeSI consultancy services Chris Scott talked about how some simple optimisations to a complex computational model resulted in significant performance improvements.

The model, TopNet, is a catchment water-balance calculator. By running profiling tools on the system, Chris was able to identify a key bottleneck in writing checkpoint data out to NetCDF files on disk. By rewriting the output code to move from line-by-line writes to larger blocks (sketched below), the bottleneck went away, along with the pain the system was causing the HPC disk subsystems.

This was not a particularly complex optimisation; the key is simply having access to someone who can focus on how a thing runs, rather than the researcher, who focuses on the internal logic. By adding tests and profiling tools it becomes easier not only to improve performance but also to help the researchers avoid such bottlenecks in future.
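To illustrate the kind of change involved, here is a sketch using Python's netCDF4 library - not Chris's actual TopNet code, and the dimensions and variable names are invented:

    import numpy as np
    from netCDF4 import Dataset

    data = np.random.rand(10000, 500)  # e.g. time steps x catchments (toy data)

    with Dataset("checkpoint.nc", "w") as nc:
        nc.createDimension("time", data.shape[0])
        nc.createDimension("basin", data.shape[1])
        var = nc.createVariable("storage", "f8", ("time", "basin"))

        # Slow pattern: one small write per time step - thousands of tiny I/O operations
        # for t in range(data.shape[0]):
        #     var[t, :] = data[t, :]

        # Faster pattern: write the whole block (or large chunks) in one go
        var[:, :] = data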

Closing

Nick Jones from NeSI gave a glimpse of what’s coming soon for NZ researchers and developers in the form of new supercomputers. Rather than repeat it here take a look at https://www.nesi.org.nz/news/2017/06/new-computing-platform-power-nz-research

Finally, I closed the conference with a brief summary of the above and some new things I needed to go home and investigate:
New terms

  • Semantic versioning
  • Open annotation
  • www.dot.tk. Free domain names for all.
  • Bus Factor
  • SciDevOps
  • and a list of github repos to fork.

After a show of hands it looks like everyone enjoyed their time at the conference, made new friends and learned new stuff. They want to do it again, and so do I, so we will be back next year with SciCoCon 2018 - or perhaps the first NZ RSEA conference.

Science Coding Conference 2017

The 2017 Science Coding Conference will be held from Tuesday August 1 to Wednesday August 2 at the Massey University Campus in central Wellington.

For 2017, we have changed the name from CRI Coding Conference to Science Coding Conference (SciCo!) to reflect the broadening audience of this event.

This event is for anyone involved in the programming side of NZ research. We encourage all research software engineers, IT managers, researchers and operational software developers from NZ Crown Research Institutes, universities and other public sector organisations to attend the conference.

Where: Tussock Venue, Massey University
Tasman Street, Entrance E,
Wellington Central, Wellington 6021

I’ll be tweeting #SciCoCon2017

The conference will be attended by both professional software developers and science coders, and it’s an opportunity to meet peers and talk about the tools, techniques, processes and challenges associated with developing robust scientific code and applications.

A New Hope

After 9 years at NIWA I have decided to hang out my shingle as an independent IT consultant.

This gives me more time to work on projects of personal interest and allows me to engage with a broader range of organisations where my leadership and software management skills can help to build and scale development teams, create processes and tools, raise overall production quality, and improve time to delivery without burning out your team.

I am an experienced information technologist with a practical grasp of business; a strategic creative thinker and leader.

I have over 30 years of experience in software development, systems architecture design and team leadership across a range of industries including health, automotive, mobile phones, environmental science and industrial control. This includes operating my own company; leading a major start-up mobile phone company’s software team through a high-growth period; and an executive CIO role responsible for 38 IT staff and a $12m budget.

I have a clear track record as an innovative systems designer and project manager, taking ideas from conception through implementation and delivery and into operation in a full, business-process-aligned life cycle, managing teams using agile methodologies and leading-edge tools and technologies.

Andrew talking about NIWA Systems architecture

NiwaWeather

Designed for embedding.

In 2013, over a 6-week period, the NIWA Systems Development Team rapidly built a weather website to demonstrate how NIWA’s EcoConnect web services can be made available as visual components. The result is the rather stylish http://weather.niwa.co.nz

The page provides an ‘at a glance’ summary of the weather for the day. You can tell the mood of the day, and you can easily spot those windows of opportunity – a dry spell or rain showers. The design is intentionally impressionistic rather than detailed.

The application uses a PHP microservice to aggregate several EcoConnect API calls for different data products into a smaller JSON response giving the forecast for the requested location. The front end is written in Javascript with D3 generating the SVG graphics. The experience of writing a fairly complex javascript application here led us to investigate Javascript frameworks and eventually adopt Angular for future projects.

The interactive website is great and I hope you will bookmark your local page. But the code is also designed to support embedding of the forecasts in other websites.

Like this for Auckland:

And this for Dunedin:

The key is to add /kiosk after the location in the URL. This gives you just the interactive graphical elements and leaves off the rest of the website content and chrome.

If you do use this in your website be sure to give credit to NIWA and provide a link to the main weather site: http://weather.niwa.co.nz. Also be sure to read the DISCLAIMER

If you are running a commercial site then you will need to get an agreement from NIWA and they might want to charge you if traffic is high.

We can also do special locations – NIWA installed a local weather station at Mystery Creek for Fieldays 2013, where the site was launched. The NiwaWeather kiosk appeared on the giant display screens, on the stand display and on the Fieldays website.

API

The Server

The service url is http://weather.niwa.co.nz
The full path is: http://weather.niwa.co.nz/{locationList}/{displayMode}/{trackingMode}/{extras}

Location list

The first parameter is one or more location names. The forecast is repeated for each location given in a comma separated list.

  1. Single location, e.g. Auckland
  2. Names with spaces use underscores, e.g. Mystery_Creek
  3. Multiple locations, e.g. Auckland,Wellington – shows a list of panels.
    The stations list is available at /stationsAvailable; this returns a javascript variable containing the stations list.
    Example: http://weather.niwa.co.nz/Auckland,Christchurch,Timaru/kiosk

Mode

The mode parameter controls what is on the page and the number of hours showing:

  • weather (default) – shows the title bar with the previous locations, the hours, days, main page body text and footer
  • kiosk – shows hours and days only
  • ribbon – shows hours only with 144 hours showing – for testing purposes only.

Track

The track parameter controls how the page is presented and updates over time.

  • hour – keep the infoText in the left hand column and move the day left once each hour. Default for weather mode
  • day – centre the day in the window and track the infoText across the page. Default for kiosk mode

Extra features

The extras parameter controls the appearance of extra features. You can add multiple extras in a comma-separated list.

  • textOnly – replace the hours and days SVG graphics with a pure HTML table.
  • temperatureChart – display a temperature chart on the hours graphic.
  • minMaxTemperature – display min and max daily temperatures on the hours graphic. (can be combined with temperatureChart).

If we detect that your browser does not support SVG we will switch to textOnly automatically.
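Putting the pieces together, a forecast URL can be assembled from the location list, mode, track and extras. A small illustrative Python helper (not part of the NiwaWeather code):

    BASE = "http://weather.niwa.co.nz"

    def niwa_weather_url(locations, mode="weather", track=None, extras=None):
        # Path pattern: /{locationList}/{displayMode}/{trackingMode}/{extras}
        path = ",".join(name.replace(" ", "_") for name in locations)
        parts = [BASE, path, mode]
        if track or extras:
            # track occupies the segment before extras; fall back to "day" just to fill it
            parts.append(track or "day")
        if extras:
            parts.append(",".join(extras))
        return "/".join(parts)

    # e.g. http://weather.niwa.co.nz/Auckland,Mystery_Creek/kiosk/day/temperatureChart
    print(niwa_weather_url(["Auckland", "Mystery Creek"], mode="kiosk",
                           track="day", extras=["temperatureChart"]))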

Other URLS

/about – Shows the about page.
/weathermap/rain – Shows the rainfall map

This is a reprint from an older blog posting as I wanted to keep the API documentation available.