During a recent workshop on the fringe of this year’s SatSummit, participants discussed how to design APIs that simplify ordering satellite data. Matthew Hanson wrote a summary of the workshop, noting the complexity of decision-making that goes into ordering data and tasking a satellite, which is arguably one reason why we haven’t seen a production-ready ordering API so far:
It turned out the most interesting discussions were centered around tasking as a process, rather than the details of a transactional API with a data provider. Tasking is really about the negotiation, as Phil Varner (Element 84) put it: a user says “This is what I want” and the provider responds with “This is what I can offer”. The questions that arose were less about detail and more about how users should interact with the provider. How do users want to discover what is feasible? How do they evaluate multiple possible options and request one or more of those options?
And consequently, how the ordering APIs could be designed:
There was a general consensus that users start by making a “feasibility request”. Included in the request is usually a spatial Area Of Interest (AOI) and a date/time range, Time of Interest (TOI), and possibly some additional parameters constraining the options. What is returned by the provider is a list of possible results that may vary by total area of coverage, time of acquisition, price, resolution, sun angle, or by virtually any collection parameter.
Rather than the provider trying to make a decision of what the user wants from the available options, this choice should be pushed back to the user.
The user then gets to pick their preferred options and places the order for the product best suited for their needs.
Detailed notes of the event are on GitHub, providing some early and still rough outlines of potential API states and parameters, alongside insights from the more high-level discussions.
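The negotiation flow described in the quote (feasibility request, list of options, order) could be sketched roughly like this. To be clear, everything here, from `StubProvider` to the field names, is an illustrative assumption, not one of the APIs discussed at the workshop:

```python
# Tasking as negotiation: "This is what I want" -> "This is what I
# can offer" -> the user picks and orders. All names are hypothetical.

class StubProvider:
    """Stands in for a tasking provider's transactional API."""

    def check_feasibility(self, aoi, toi, **constraints):
        # A real provider would plan possible acquisitions over the
        # AOI/TOI; here we return two canned options.
        return [
            {"id": "opt-1", "coverage": 0.95, "price": 900},
            {"id": "opt-2", "coverage": 0.80, "price": 450},
        ]

    def place_order(self, option_id):
        return {"order_id": option_id, "status": "accepted"}


provider = StubProvider()
aoi = {"type": "Polygon", "coordinates": [...]}  # spatial Area Of Interest

# 1. Feasibility request: AOI, TOI, and constraining parameters.
options = provider.check_feasibility(
    aoi, ("2023-01-01", "2023-01-31"), max_cloud_cover=0.2
)

# 2. The choice is pushed back to the user, who picks, say, the cheapest.
chosen = min(options, key=lambda opt: opt["price"])

# 3. Place the order for the product best suited to their needs.
order = provider.place_order(chosen["id"])
```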
There’s a new Carpentries-style lesson teaching the fundamentals of processing geospatial raster and vector data with Python. It teaches the basics of vector and raster data, how to access raster data via STAC, how to do calculations on raster data, and parallelisation with Dask.
The course is designed for in-person workshops, but you can easily follow the instructions at home.
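As a taste of the raster calculations the lesson covers, here is a minimal NDVI computation. In the lesson itself the bands would be loaded from a STAC asset (e.g. with rioxarray); tiny NumPy arrays stand in for them here:

```python
import numpy as np

# Normalized Difference Vegetation Index from red and near-infrared
# reflectance. In a real workflow these bands would come from a STAC
# item; small arrays stand in here.
red = np.array([[0.2, 0.3],
                [0.4, 0.1]])
nir = np.array([[0.6, 0.7],
                [0.5, 0.8]])

ndvi = (nir - red) / (nir + red)  # per-pixel, values in [-1, 1]
```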
Craig Kochis has got you covered if you want to learn how to build a WebGL map application without any libraries. It’s a very detailed post that covers the basics of WebGL for maps and rendering vector tiles and also looks at different ways to make interactions like zooming feel more natural and performant.
We’ve been able to make web maps with GeoJSON data for some time now, and converting GeoJSON to GeoArrow and preparing the data for deck.gl requires extra development work. So why would you want to use GeoArrow? The short answer: it’s incredibly fast.
GeoArrow overlaps almost exactly with the format that deck.gl expects! So deck.gl can render from GeoArrow’s physical representation very efficiently. For point and linestring geometry types, the underlying coordinates array can essentially be copied directly to the GPU with no CPU processing required. For polygon geometries, only polygon tessellation still needs to happen on the CPU.
We’re looking at the not-so-distant future of web mapping here, when we can render millions of features onto a web map without a noticeable impact on performance.
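The performance gain comes from GeoArrow's memory layout: coordinates live in flat, columnar buffers rather than nested JSON structures, which is essentially the layout a GPU wants. A small sketch of the difference for points (the flattening shown here is a simplified illustration of the idea, not the GeoArrow library API):

```python
import numpy as np

# GeoJSON: one nested object per feature, which a renderer must parse
# feature by feature on the CPU.
geojson_points = [
    {"type": "Point", "coordinates": [13.4, 52.5]},
    {"type": "Point", "coordinates": [2.35, 48.86]},
]

# GeoArrow-style layout: one contiguous float64 buffer with x/y
# interleaved. A buffer like this can be handed to the GPU without
# per-feature processing.
coords = np.array(
    [c for feature in geojson_points for c in feature["coordinates"]],
    dtype=np.float64,
)
```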
Kyle Onda with a concise overview of file-based and web-API vector data formats. The post looks specifically at the applicability of sharing water data, but the conclusions can easily be transferred to other domains.
Detailed editing in OpenStreetMap, such as adding buildings, turn restrictions, or street crossings, can be laborious and time-consuming. But more data and newer tools are available to assist armchair mapping from the comfort of your home:
Mapillary provides street-level imagery and point data extracted from the images, which you can use to guide editing in popular editors like iD or JOSM.
RapiD, an extended version of OpenStreetMap’s default iD editor, provides additional datasets from Microsoft, Esri, and Meta (formerly Facebook), and functionality to integrate the data into OpenStreetMap.
Open Mapping Hubs and Meta recently hosted an online workshop introducing how to use Mapillary and RapiD to edit OpenStreetMap, and the recording is available on YouTube.
Both helpers come with caveats. During my very unscientific review (I checked a few neighbourhoods around the world that I’m familiar with), I noticed that Mapillary images can be pretty outdated – most images I saw were from 2019 or earlier, some even from 2014. And for RapiD, the OpenStreetMap Wiki includes a big banner saying that every edit must be reviewed individually, otherwise the modifications are considered an import.
Christopher Beddow takes an in-depth look at Visual Positioning Systems (VPS), the solutions companies like Google, Niantic, or Snap have built, and what possibilities the technology opens.
VPS is naturally associated with Augmented Reality (AR), because of the way it enables AR services. It serves as one of several bridges between the more legacy geospatial topics like maps, data, location, and the world building that demands more than legacy systems typically offer.
Advancements in alternative positioning technologies seem to rekindle the hype around augmented reality. So far VPS is mainly used with video games and in product demonstrations of navigation technology, but I haven’t seen any applications of augmented reality beyond that.
Giles van Gruisen explains the underlying concepts of Felt, and more generally web maps, a tad downplaying the complexity involved:
Don’t worry if these concepts are a bit confusing at first, this stuff is tricky!
That’s one way to put it, considering what is involved in making zooming and panning performant interactions:
Specifically, when the user starts any gesture that might affect the viewport position, we immediately take note of the original viewport state. Then, as we calculate the new viewport position, we can easily derive a transformation that we can use to translate and scale existing element geometries to their new positions and sizes. So, rather than continuously projecting every last coordinate pair on every single frame, we calculate a single “viewport transformation” that can be applied to all of them.
To take it a step further, we don’t actually need to apply that transformation to every single element individually, but rather to a single parent layer that contains all of the shapes as children. This results in a single compositing layer being translated and scaled, and makes for highly efficient zoom and pan gestures. Finally, when the user stops their gesture, we do actually recalculate the projected position of each coordinate pair, and in a single frame swap the old geometries for the new, and remove the temporary transformation.
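The core trick in the quote, deriving one scale-plus-offset from the viewport change instead of reprojecting every coordinate per frame, can be shown in a few lines. The viewport model below (an origin in world units and a scale in pixels per world unit) is my own simplified assumption, not Felt's actual implementation:

```python
import numpy as np

def project(world, origin, scale):
    """Project world coordinates to screen pixels for a viewport."""
    return (world - np.asarray(origin)) * scale

def viewport_delta(old_origin, old_scale, new_origin, new_scale):
    """One (s, d) such that new_screen = old_screen * s + d for all points."""
    s = new_scale / old_scale
    d = (np.asarray(old_origin) - np.asarray(new_origin)) * new_scale
    return s, d

world = np.array([[10.0, 20.0], [30.0, 40.0]])
screen = project(world, origin=(0.0, 0.0), scale=2.0)

# The user zooms and pans: derive the transform once per frame...
s, d = viewport_delta((0.0, 0.0), 2.0, (5.0, 5.0), 4.0)
moved = screen * s + d  # ...and apply it to all existing geometry.
```

When the gesture ends, `project` would be run once more against the final viewport to swap in exact geometry, matching the last step the quote describes.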
Whenever I read an article like this, I feel grateful for anyone building and maintaining map libraries and applications. This is complicated stuff, and seeing how easy it is to put a map on the Web these days makes you realise how much thought and work goes into these solutions.
I’m forced to clarify this because we are getting more and more support requests from people who have been misled by YouTube tutorials into believing that a simple Python script can determine the location of any phone simply by entering the phone number.
Chris Holmes wrote an excellent summary of the Cloud-Native Geospatial Outreach Event, which took place in April and gathered people working with new cloud-native geo-data formats and APIs, like COG, Zarr, STAC, or COPC. Chris highlights selected talks to get you started with the formats, how organisations adopt them, and tutorials going deeper into technical details.