MPC Seminar 8 dec'15

Abstracts and presentations of the seminar 'Management of massive point cloud data: wet and dry (2)', 8 December 2015, 10-17 hours, TU Delft, Berlage rooms, Faculty of Architecture and the Built Environment (Julianalaan 134, 2628 BL Delft). In cooperation with: NCG (Nederlands Centrum voor Geodesie en geo-informatie), OGh (Oracle Gebruikersclub Holland), TU Delft and the Netherlands eScience Center. Click on the name of the presenter to download the pdf of the presentation.

Peter van Oosterom (TU Delft) Welcome, overview programme, launch of OpenPointCloudMap, propositions, and concluding remarks

Rogier Broekman (Hydrografische Dienst) From point cloud to bathymetric digital elevation model
The Dutch Navy is responsible for the creation and publication of nautical charts in the Dutch Continental Zone of the North Sea. For safety of navigation, these charts need up-to-date information on the shallowest depth in the area, obstructions at the sea bottom and depth contour lines as guidance for the mariner. The Dutch Navy utilises two hydrographic vessels to collect Multibeam Echosounder data in an area of 59,000 km2, 1.5 times the size of the country. Rogier Broekman (Dutch Navy) and Niels Nijhuis (CARIS) will show how the data is taken from the multibeam to a bathymetric digital elevation model of the North Sea. Both the point clouds and the DEMs are stored in a format that can handle and visualise these large volumes.
Further developments of the CSAR format and a new modelling method that overcomes the limitations of single-resolution grids and Triangulated Irregular Networks (TINs) are described. The Variable Resolution surface modelling concept will be explained and the visualization demonstrated.

Niels Nijhuis (CARIS) From point cloud to bathymetric digital elevation model, part b (see the description above)

Wilbert Brink (Fugro) Overview of techniques to collect subsea point cloud data
The practice of taking manual depth measurements at sea to create bathymetric maps dates back to antiquity. The introduction of the acoustic echo sounder in the early 20th century resulted in more accurate and, above all, more densely populated point clouds of the seabed, especially after the transition from singlebeam to multibeam echo sounders. Modern-day multibeam systems determine hundreds of sounding points, tens of times per second. Acoustic waves have traditionally been used because of their superior penetration capacity in water compared to electromagnetic waves. The visible spectrum, however, has a relatively low absorption factor as well and can be used under water. Standard photogrammetric methods, as well as laser profiling and LiDAR, provide point clouds with an even higher density than the latest multibeam systems. Besides this, another advantage of using vision techniques is that the raw data is very understandable for humans: our brain is much better trained to interpret visual information than acoustic information.

Edward Verbree (TU Delft) Connecting indoor and outdoor - Insight through explorative point clouds (MSc Geomatics Synthesis project)
In current processes, the steps to be taken and the throughput time of data acquisition, processing, modelling, analysis and visualisation are driven by the information model and the user requirements.
Among other issues, at the final stage of this process some of the inherently available information from the acquired data is not taken into account or is lost. We use the term 'explorative' to indicate the unrevealed possibilities of direct use of point clouds in this geo-processing chain. These point clouds (massive sets of X,Y,Z coordinates and attribute values) are the connecting elements between data acquisition and information retrieval. Thus, the acquired point clouds themselves have to be processed, analysed and visualised as much and as directly as possible to expose information to all kinds of users. With this mindset, 15 students of the MSc Geomatics have conducted their Synthesis project, in which they combine the knowledge, skills and insight of their core Geomatics courses while developing their project management skills. The full indoor environment of the 'Bouwpub' of the Faculty of Architecture and the Built Environment has been scanned by the Zeb1 mobile laser scanner, another huge point cloud of the exterior of this 'Bouwpub' has been obtained from 1132 aerial drone images, and a massive dense-matched point cloud of the full building and surroundings of the Architecture faculty has been obtained by the latest Cyclomedia recording system. These three different point clouds have been processed directly to derive the 'pointless' navigable space, to link the exterior points to the recorded images, and to classify roads. This presentation will demonstrate the obtained results of connected indoor and outdoor environments as key examples of gaining insight through explorative point clouds.

Romulo Goncalves (NL eScience Center), Kostis Kyzirakos (CWI) and Dimitar Nedev (MonetDB Solutions) LiDAR data exploration boosted by a column-store
Currently, large data sets such as country-wide LiDAR scans are being collected and combined with large collections of semantically rich objects to form a new source of knowledge for modern risk management systems.
To integrate different data sets with spatial data, and to have efficient and flexible data exploration, a Spatial Database Management System (SDBMS) is advised. However, current solutions are not capable of efficiently handling large LiDAR data sets due to the high cost of converting, loading, indexing and compressing point cloud data [1]. In this talk we present an efficient data management layer for geo-spatial data analysis, with special emphasis on LiDAR data. The advantage of this approach is that, unlike previous solutions, it stores the raw data sets, and transforms, combines and processes them only when needed, thereby vastly improving flexibility and performance.
[1] P. van Oosterom, O. Martinez-Rubi et al. Massive point cloud data management: design, implementation and execution of a point cloud benchmark. Computers & Graphics, 2015.

Albert Godfrind and Mike Horhammer (Oracle) Oracle support options for point clouds
This presentation introduces the management of large geographic data sets in Oracle databases. Specifically, we will present and compare several alternative storage models: blocked, flat, and hybrid, with their benefits relative to different applications, with a focus on performance and scalability. We will give an overview of currently available point cloud functionality: the creation, loading, compression and blocking into the various models, as well as the functions to perform range selection and analysis: clipping, finding nearest neighbours, contouring, etc.

Theo Tijssen (TU Delft) Point cloud data management benchmark: Oracle, PostgreSQL, MonetDB, and LAStools
LiDAR, photogrammetry, and various other survey technologies enable the collection of massive point clouds. Faced with hundreds of billions or trillions of points, the traditional solutions for handling point clouds usually underperform, even for classical loading and retrieval operations.
To obtain insight into the factors affecting performance, the researchers involved in the Massive Point Cloud for eSciences project (http://pointclouds.nl) carried out single-user tests with different storage models on various systems, including Oracle Spatial and Graph, PostgreSQL-PostGIS, MonetDB and LAStools, during the second half of 2014. In the summer of 2015, the tests were further extended with the latest developments of the systems, including the new version of the Point Data Abstraction Library (PDAL) with efficient compression. Web services based on point cloud data are becoming popular, and they have requirements that most of the available point cloud data management systems cannot fulfil. This means that specific custom-made solutions are constructed. We identify the requirements of these web services and propose a realistic benchmark extension, including multi-user and Level-of-Detail queries. This helps in defining the future lines of work for more generic point cloud data management systems that support such increasingly demanding web services.

Oscar Martinez Rubi (NL eScience Center) The AHN2 3D web viewer and download tool
As part of the Massive Point Cloud for eSciences project (http://pointclouds.nl) we have designed and implemented a divide-and-conquer algorithm for the creation of the multi-resolution data structures used for the web visualization of massive point cloud data sets, such as the AHN2 data set with 640 billion points. Thanks to this algorithm, the Netherlands is the first country whose entire surface can be freely visualized as a point cloud over the web (http://ahn2.pointclouds.nl). In this talk we will describe the algorithm and present the created web service and its features, with special focus on a novel multi-resolution download tool that allows users to download point cloud data of any selected area for further analysis.
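The divide-and-conquer idea behind such multi-resolution structures can be pictured with a toy sketch (a minimal illustration, not the project's actual implementation; function and parameter names are invented): every quadtree node keeps a small sample of its points and spills the rest one level down, so coarse levels give a sparse overview and deeper levels add detail.

```python
def assign_levels(points, node_capacity=4, max_level=8):
    """Distribute 2D points in the unit square over quadtree levels.

    Each node keeps at most `node_capacity` points; the remainder is
    grouped into child cells one level deeper. A web viewer can then
    stream levels 0..k to show a progressively denser point cloud.
    (Illustrative sketch only.)"""
    levels = {}
    stack = [(0, list(points))]  # (level, points falling in one cell)
    while stack:
        level, pts = stack.pop()
        if level >= max_level:
            levels.setdefault(level, []).extend(pts)  # deepest level keeps all
            continue
        keep, spill = pts[:node_capacity], pts[node_capacity:]
        levels.setdefault(level, []).extend(keep)
        if spill:
            cell = 1.0 / (2 ** (level + 1))  # child cell edge length
            children = {}
            for p in spill:
                key = (int(p[0] // cell), int(p[1] // cell))
                children.setdefault(key, []).append(p)
            stack.extend((level + 1, cpts) for cpts in children.values())
    return levels
```

Production viewers additionally enforce a minimum spacing between the points kept per level, so each level is a well-distributed subsample rather than an arbitrary one.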
Martin Kodde (Fugro) Massive point cloud processing in the cloud
Point clouds are increasingly the fundamental data source for spatial analysis. The acquisition of point cloud data is becoming easier and more cost-effective: hardware costs are going down, and new developments such as dense matching open up new data sources for point clouds. Inevitably, this will further increase the availability of massive point clouds. These massive point clouds are not necessarily nationwide data sets; they can also be very dense local data sets, or point clouds with a high temporal resolution. In this presentation, we will show cases of such massive point clouds in two very different domains: railways and the petrochemical industry. But while the availability of massive point clouds may increase, they do not directly tie in to the questions posed by end users such as consultants, engineers or policy makers. In particular, the time from data acquisition to end user is markedly long. As massive point clouds no longer fit the desktop paradigm, cloud-based technology is required. Two aspects are essential: cloud-based visualization and cloud-based processing. Great steps have been made towards visualization in 2015; challenges still remain in the processing of data in the cloud. During the presentation, we will highlight what can be done using cloud technology, and which challenges still remain. As with many innovations, it turns out that with massive point clouds, technology, infrastructure and standards have to go hand in hand.

George Vosselman (University of Twente) Automated extraction of 3D building models and street furniture from point clouds
Point clouds are an important data source for the production of 3D city models. In recent years a balance has been found between data-driven and model-driven approaches in the use of roof topology graphs that model the topology of the roof faces of small building parts.
As not all topological relationships can be extracted from the point clouds immediately, the roof topology graphs may contain errors. Research has been conducted to automatically detect, recognize and correct errors in the topology graphs and the reconstructed 3D models. These developments led to an increase in the automation rate of 3D building model reconstruction from 80% to 95%. Street furniture can be captured well by mobile laser scanners. In the point clouds, street furniture is detected by first removing points on the ground and then clustering the remaining above-ground points. Classification algorithms then try to determine the most likely type of street furniture (street lights, traffic signs, traffic lights). To improve current classification success rates, research is conducted into segmenting the street furniture into characteristic elements.

Xuefeng Guan (Wuhan University, China) Parallel streaming Delaunay triangulation for LiDAR
This presentation introduces a robust parallel Delaunay triangulation algorithm called ParaStream for processing billions of points from non-overlapping block LiDAR files. The algorithm targets ubiquitous multicore architectures. ParaStream integrates streaming computation with a traditional divide-and-conquer scheme, in which additional erase steps are implemented to reduce the runtime memory footprint. Furthermore, a kd-tree based dynamic scheduling strategy is proposed to distribute triangulation and merging work over the processor cores for improved load balance. ParaStream exploits most of the computing power of multicore platforms through parallel computing, demonstrating high data throughput as well as a low memory footprint. Experiments on a two-way quad-core Intel Xeon platform show that ParaStream can triangulate approximately one billion LiDAR points (16.4 GB) in about 16 minutes with only 600 MB of physical memory. The total speedup (including I/O time) is about 6.62 with 8 concurrent threads.
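The kd-tree style work partitioning used by such parallel triangulators can be sketched as follows (an illustrative sketch, not the ParaStream code; the triangulation of each block and the merging along the cut lines are omitted):

```python
def kd_partition(points, leaf_size=100):
    """Split a 2D point set into roughly equal-sized blocks by recursive
    median cuts, alternating between the x and y axes. Each block can
    then be triangulated independently on its own core; a merge step
    along the cut lines (not shown) stitches the partial triangulations."""
    def split(pts, depth):
        if len(pts) <= leaf_size:
            return [pts]
        pts = sorted(pts, key=lambda p: p[depth % 2])  # alternate x/y cut
        mid = len(pts) // 2
        return split(pts[:mid], depth + 1) + split(pts[mid:], depth + 1)
    return split(list(points), 0)
```

Because every cut is at the median, the blocks stay balanced, which is what allows near-linear speedup over worker threads; a streaming variant would additionally finalize and write out triangles that can no longer change, keeping the memory footprint small.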
Wiebe de Boer and Fedor Baart (Deltares) Point clouds in the Delta
In Delta research, point clouds are relevant to study both the present state of the Delta and its evolution over time. The dynamics of morphological features (e.g. coastline changes, bar migration and dune development) are relevant for coastal safety, navigation, cables, pipelines and offshore constructions. More and more point clouds are being measured continuously, for example by boats (e.g. the TESO ferry between Den Helder and Texel) and with drones. As we progress to streaming the data while it is being measured, this will eventually allow us to perform data analysis on the fly and/or to use the data in operational forecasting systems. However, this also poses challenges for the storage, validation and management of the data, as the point clouds vary in accuracy, spatial resolution and time coverage. In this presentation we will show which kinds of point clouds are used in the Delta, where to find them, and which challenges we are facing.

Dick ten Napel (RWS) Wet and dry point cloud acquisition and applications within RWS
Rijkswaterstaat is responsible for the design, construction, management and maintenance of the main infrastructure facilities in the Netherlands. This includes the main road network, the main waterway network and the water systems. Information plays a major role in the implementation of these tasks. Every day a large amount of data is measured, among other formats in the form of point clouds. Some examples are: depth measurements of the waterway network, inspection of structures such as bridges and sluices, and measurements to maintain the highways. Because of their large size, point clouds are currently often converted to other formats before use. This is a time-consuming process, and information is lost during the conversions. It is a challenge to develop new ways of working that directly use point clouds, so that all the available information reaches the user.
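The on-the-fly analysis of streamed soundings mentioned in the Deltares talk can be pictured with a minimal running-statistics sketch (class and attribute names are invented for illustration): each incoming depth updates the summary immediately, so only a few numbers, not the full point cloud, must be held in memory.

```python
class RunningDepthStats:
    """Track min / max / mean of streamed depth soundings incrementally,
    e.g. to flag shoals while a survey vessel is still measuring."""

    def __init__(self):
        self.count = 0
        self.total = 0.0
        self.minimum = float("inf")
        self.maximum = float("-inf")

    def update(self, depth):
        """Fold one incoming sounding into the running summary."""
        self.count += 1
        self.total += depth
        self.minimum = min(self.minimum, depth)
        self.maximum = max(self.maximum, depth)

    @property
    def mean(self):
        return self.total / self.count if self.count else 0.0
```

An operational forecasting system could maintain such aggregates per grid cell, so that a shallow-depth alarm can fire before the full point cloud is ever assembled.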
Milan Uitentuis and Mark Terlien (IntellinQ) Managing and processing massive amounts of maritime point cloud data with GeolinQ
Managing maritime point clouds is becoming more challenging because of the increasing volume of acquired data and the multiple uses of point clouds. Data management is crucial to cope with these challenges. GeolinQ is a data management solution offering a combination of fast and efficient storage, visualization, web distribution, and flexibility with respect to point attributes, point cloud metadata and styling. Users are able to browse point cloud data sets based on metadata and location, visualize point cloud data according to their own needs, and publish point cloud data sets based on customer requirements. Special algorithms are used for efficient storage in database tables. These algorithms optimize storage during point cloud import and generate the visualization pyramids and the point cloud footprint. This approach is not limited by the number of point attributes, point cloud size, physical memory or hard disk space. As the data management solution also supports raster and vector data sets, derived data products can be managed as well.

Bart De Lathouwer (OGC) Reporting from the OGC Point Cloud DWG
(Note: presentation given by Peter van Oosterom, as Bart De Lathouwer could not attend the seminar)
The use of point clouds is growing at a rapid rate and can be found in a variety of domains including utilities, mining, and outdoor and indoor 3D modelling. Point cloud data is currently stored in many formats, some now de facto standards, defined for many domains such as multi-dimensional scientific data, LiDAR data, elevation data, seismic data, bathymetric data, meteorological data, and fixed/mobile consumer sensors (IoT). With so many uses of point cloud data but little standardization, a variety of different formats exist.
OGC membership has registered a concern that without the development of best practices or consensus standards, divergence will continue in the community and interoperability will be inhibited. As an example, LiDAR data is most commonly exchanged using an ASPRS standard format known as LAS. However, end-user consumption of LAS content for analysis or display requires indexing, optimization and/or compression of the content, with multiple methods available, ranging from vendor-specific indexing schemes to commercial and free optimization and compression toolsets. In a press release of 2 November 2015, the OGC and the ASPRS announced their agreement to work together. The Point Cloud DWG is being established to address the gap in the OGC standards baseline with regard to point cloud data. Therefore, one of the first activities of the Point Cloud DWG is to develop a questionnaire, which is expected to be released in December 2016.
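As an illustration of why LAS is easy to exchange yet needs extra indexing for consumption, its fixed-layout public header can be read with a few lines of code. The sketch below builds a synthetic header in memory and parses it back (byte offsets as given in the ASPRS LAS 1.2 specification; the helper name is invented):

```python
import struct

def parse_las_header(buf):
    """Extract a few key fields from a LAS public header block.
    Offsets follow the ASPRS LAS 1.2 specification."""
    if buf[0:4] != b"LASF":
        raise ValueError("not a LAS file")
    return {
        "version": (buf[24], buf[25]),            # version major, minor
        "point_format": buf[104],                 # point data format ID
        "n_points": struct.unpack_from("<I", buf, 107)[0],  # record count
    }

# Build a minimal synthetic LAS 1.2 header for demonstration purposes.
header = bytearray(227)                # a LAS 1.2 header is 227 bytes
header[0:4] = b"LASF"                  # file signature
header[24], header[25] = 1, 2          # version 1.2
header[104] = 1                        # point format 1 (format 0 + GPS time)
struct.pack_into("<I", header, 107, 1_000_000)  # number of point records
```

Reading the header tells a consumer how many points follow and in which record layout, but nothing about their spatial organisation; that is exactly the gap the vendor-specific indexing schemes, and the DWG's standardization effort, aim to fill.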