fbpx
Wikipedia

3D scanning

3D scanning is the process of analyzing a real-world object or environment to collect three dimensional data of its shape and possibly its appearance (e.g. color). The collected data can then be used to construct digital 3D models.

Making a 3D-model of a Viking belt buckle using a hand held VIUscan 3D laser scanner

A 3D scanner can be based on many different technologies, each with its own limitations, advantages and costs. Many limitations in the kind of objects that can be digitised are still present. For example, optical technology may encounter many difficulties with dark, shiny, reflective or transparent objects. For example, industrial computed tomography scanning, structured-light 3D scanners, LiDAR and Time Of Flight 3D Scanners can be used to construct digital 3D models, without destructive testing.

Collected 3D data is useful for a wide variety of applications. These devices are used extensively by the entertainment industry in the production of movies and video games, including virtual reality. Other common applications of this technology include augmented reality,[1] motion capture,[2][3] gesture recognition,[4] robotic mapping,[5] industrial design, orthotics and prosthetics,[6] reverse engineering and prototyping, quality control/inspection and the digitization of cultural artifacts.[7]

Functionality edit

The purpose of a 3D scanner is usually to create a 3D model. This 3D model consists of a polygon mesh or point cloud of geometric samples on the surface of the subject. These points can then be used to extrapolate the shape of the subject (a process called reconstruction). If colour information is collected at each point, then the colours or textures on the surface of the subject can also be determined.

3D scanners share several traits with cameras. Like most cameras, they have a cone-like field of view, and like cameras, they can only collect information about surfaces that are not obscured. While a camera collects colour information about surfaces within its field of view, a 3D scanner collects distance information about surfaces within its field of view. The "picture" produced by a 3D scanner describes the distance to a surface at each point in the picture. This allows the three dimensional position of each point in the picture to be identified.

In some situations, a single scan will not produce a complete model of the subject. Multiple scans, from different directions are usually helpful to obtain information about all sides of the subject. These scans have to be brought into a common reference system, a process that is usually called alignment or registration, and then merged to create a complete 3D model. This whole process, going from the single range map to the whole model, is usually known as the 3D scanning pipeline.[8][9][10][11][12]

Technology edit

There are a variety of technologies for digitally acquiring the shape of a 3D object. The techniques work with most or all sensor types including optical, acoustic, laser scanning,[13] radar, thermal,[14] and seismic.[15][16] A well established classification[17] divides them into two types: contact and non-contact. Non-contact solutions can be further divided into two main categories, active and passive. There are a variety of technologies that fall under each of these categories.

Contact edit

 
A coordinate measuring machine (CMM) with scanning head
 
3D scanning of a fin whale skeleton in the Natural History Museum of Slovenia (August 2013)

Contact 3D scanners work by physically probing (touching) the part and recording the position of the sensor as the probe moves around the part.

There are two main types of contact 3D scanners:

  • Coordinate measuring machines (CMMs) which traditionally have 3 perpendicular moving axis with a touch probe mounted on the Z axis. As the touch probe moves around the part, sensors on each axis record the position to generate XYZ coordinates. Modern CMMs are 5 axis systems, with the two extra axes provided by pivoting sensor heads. CMMs are the most accurate form of 3D measurement achieving micron precision. The greatest advantage of a CMM after accuracy is that it can be run in autonomous (CNC) mode or as a manual probing system. The disadvantage of CMMs is that their upfront cost and the technical knowledge required to operate them.
  • Articulated Arms which generally have multiple segments with polar sensors on each joint. As per the CMM, as the articulated arm moves around the part sensors record their position and the location of the end of the arm is calculated using complex math and the wrist rotation angle and hinge angle of each joint. While not usually as accurate as CMMs, articulated arms still achieve high accuracy and are cheaper and slightly easier to use. They do not usually have CNC options.

Both modern CMMs and Articulated Arms can also be fitted with non-contact laser scanners instead of touch probes.

Non-contact active edit

Active scanners emit some kind of radiation or light and detect its reflection or radiation passing through object in order to probe an object or environment. Possible types of emissions used include light, ultrasound or x-ray.

Time-of-flight edit

 
This lidar scanner may be used to scan buildings, rock formations, etc., to produce a 3D model. The lidar can aim its laser beam in a wide range: its head rotates horizontally, a mirror flips vertically. The laser beam is used to measure the distance to the first object on its path.

The time-of-flight 3D laser scanner is an active scanner that uses laser light to probe the subject. At the heart of this type of scanner is a time-of-flight laser range finder. The laser range finder finds the distance of a surface by timing the round-trip time of a pulse of light. A laser is used to emit a pulse of light and the amount of time before the reflected light is seen by a detector is measured. Since the speed of light   is known, the round-trip time determines the travel distance of the light, which is twice the distance between the scanner and the surface. If   is the round-trip time, then distance is equal to  . The accuracy of a time-of-flight 3D laser scanner depends on how precisely we can measure the   time: 3.3 picoseconds (approx.) is the time taken for light to travel 1 millimetre.

The laser range finder only detects the distance of one point in its direction of view. Thus, the scanner scans its entire field of view one point at a time by changing the range finder's direction of view to scan different points. The view direction of the laser range finder can be changed either by rotating the range finder itself, or by using a system of rotating mirrors. The latter method is commonly used because mirrors are much lighter and can thus be rotated much faster and with greater accuracy. Typical time-of-flight 3D laser scanners can measure the distance of 10,000~100,000 points every second.

Time-of-flight devices are also available in a 2D configuration. This is referred to as a time-of-flight camera.[18]

Triangulation edit

 
Principle of a laser triangulation sensor. Two object positions are shown.

Triangulation based 3D laser scanners are also active scanners that use laser light to probe the environment. With respect to time-of-flight 3D laser scanner the triangulation laser shines a laser on the subject and exploits a camera to look for the location of the laser dot. Depending on how far away the laser strikes a surface, the laser dot appears at different places in the camera's field of view. This technique is called triangulation because the laser dot, the camera and the laser emitter form a triangle. The length of one side of the triangle, the distance between the camera and the laser emitter is known. The angle of the laser emitter corner is also known. The angle of the camera corner can be determined by looking at the location of the laser dot in the camera's field of view. These three pieces of information fully determine the shape and size of the triangle and give the location of the laser dot corner of the triangle.[19] In most cases a laser stripe, instead of a single laser dot, is swept across the object to speed up the acquisition process. The National Research Council of Canada was among the first institutes to develop the triangulation based laser scanning technology in 1978.[20]

Strengths and weaknesses edit

Time-of-flight and triangulation range finders each have strengths and weaknesses that make them suitable for different situations. The advantage of time-of-flight range finders is that they are capable of operating over very long distances, on the order of kilometres. These scanners are thus suitable for scanning large structures like buildings or geographic features. The disadvantage of time-of-flight range finders is their accuracy. Due to the high speed of light, timing the round-trip time is difficult and the accuracy of the distance measurement is relatively low, on the order of millimetres.

Triangulation range finders are exactly the opposite. They have a limited range of some meters, but their accuracy is relatively high. The accuracy of triangulation range finders is on the order of tens of micrometers.

Time-of-flight scanners' accuracy can be lost when the laser hits the edge of an object because the information that is sent back to the scanner is from two different locations for one laser pulse. The coordinate relative to the scanner's position for a point that has hit the edge of an object will be calculated based on an average and therefore will put the point in the wrong place. When using a high resolution scan on an object the chances of the beam hitting an edge are increased and the resulting data will show noise just behind the edges of the object. Scanners with a smaller beam width will help to solve this problem but will be limited by range as the beam width will increase over distance. Software can also help by determining that the first object to be hit by the laser beam should cancel out the second.

At a rate of 10,000 sample points per second, low resolution scans can take less than a second, but high resolution scans, requiring millions of samples, can take minutes for some time-of-flight scanners. The problem this creates is distortion from motion. Since each point is sampled at a different time, any motion in the subject or the scanner will distort the collected data. Thus, it is usually necessary to mount both the subject and the scanner on stable platforms and minimise vibration. Using these scanners to scan objects in motion is very difficult.

Recently, there has been research on compensating for distortion from small amounts of vibration[21] and distortions due to motion and/or rotation.[22]

Short-range laser scanners can not usually encompass a depth of field more than 1 meter.[23] When scanning in one position for any length of time slight movement can occur in the scanner position due to changes in temperature. If the scanner is set on a tripod and there is strong sunlight on one side of the scanner then that side of the tripod will expand and slowly distort the scan data from one side to another. Some laser scanners have level compensators built into them to counteract any movement of the scanner during the scan process.

Conoscopic holography edit

In a conoscopic system, a laser beam is projected onto the surface and then the immediate reflection along the same ray-path are put through a conoscopic crystal and projected onto a CCD. The result is a diffraction pattern, that can be frequency analyzed to determine the distance to the measured surface. The main advantage with conoscopic holography is that only a single ray-path is needed for measuring, thus giving an opportunity to measure for instance the depth of a finely drilled hole.[24]

Hand-held laser scanners edit

Hand-held laser scanners create a 3D image through the triangulation mechanism described above: a laser dot or line is projected onto an object from a hand-held device and a sensor (typically a charge-coupled device or position sensitive device) measures the distance to the surface. Data is collected in relation to an internal coordinate system and therefore to collect data where the scanner is in motion the position of the scanner must be determined. The position can be determined by the scanner using reference features on the surface being scanned (typically adhesive reflective tabs, but natural features have been also used in research work)[25][26] or by using an external tracking method. External tracking often takes the form of a laser tracker (to provide the sensor position) with integrated camera (to determine the orientation of the scanner) or a photogrammetric solution using 3 or more cameras providing the complete six degrees of freedom of the scanner. Both techniques tend to use infrared light-emitting diodes attached to the scanner which are seen by the camera(s) through filters providing resilience to ambient lighting.[27]

Data is collected by a computer and recorded as data points within three-dimensional space, with processing this can be converted into a triangulated mesh and then a computer-aided design model, often as non-uniform rational B-spline surfaces. Hand-held laser scanners can combine this data with passive, visible-light sensors — which capture surface textures and colors — to build (or "reverse engineer") a full 3D model.

Structured light edit

Structured-light 3D scanners project a pattern of light on the subject and look at the deformation of the pattern on the subject. The pattern is projected onto the subject using either an LCD projector or other stable light source. A camera, offset slightly from the pattern projector, looks at the shape of the pattern and calculates the distance of every point in the field of view.

Structured-light scanning is still a very active area of research with many research papers published each year. Perfect maps have also been proven useful as structured light patterns that solve the correspondence problem and allow for error detection and error correction.[28]

The advantage of structured-light 3D scanners is speed and precision. Instead of scanning one point at a time, structured light scanners scan multiple points or the entire field of view at once. Scanning an entire field of view in a fraction of a second reduces or eliminates the problem of distortion from motion. Some existing systems are capable of scanning moving objects in real-time.

A real-time scanner using digital fringe projection and phase-shifting technique (certain kinds of structured light methods) was developed, to capture, reconstruct, and render high-density details of dynamically deformable objects (such as facial expressions) at 40 frames per second.[29] Recently, another scanner has been developed. Different patterns can be applied to this system, and the frame rate for capturing and data processing achieves 120 frames per second. It can also scan isolated surfaces, for example two moving hands.[30] By utilising the binary defocusing technique, speed breakthroughs have been made that could reach hundreds[31] to thousands of frames per second.[32]

Modulated light edit

Modulated light 3D scanners shine a continually changing light at the subject. Usually the light source simply cycles its amplitude in a sinusoidal pattern. A camera detects the reflected light and the amount the pattern is shifted by determines the distance the light travelled. Modulated light also allows the scanner to ignore light from sources other than a laser, so there is no interference.

Volumetric techniques edit

Medical edit

Computed tomography (CT) is a medical imaging method which generates a three-dimensional image of the inside of an object from a large series of two-dimensional X-ray images, similarly magnetic resonance imaging is another medical imaging technique that provides much greater contrast between the different soft tissues of the body than computed tomography (CT) does, making it especially useful in neurological (brain), musculoskeletal, cardiovascular, and oncological (cancer) imaging. These techniques produce a discrete 3D volumetric representation that can be directly visualised, manipulated or converted to traditional 3D surface by mean of isosurface extraction algorithms.

Industrial edit

Although most common in medicine, industrial computed tomography, microtomography and MRI are also used in other fields for acquiring a digital representation of an object and its interior, such as non destructive materials testing, reverse engineering, or studying biological and paleontological specimens.

Non-contact passive edit

Passive 3D imaging solutions do not emit any kind of radiation themselves, but instead rely on detecting reflected ambient radiation. Most solutions of this type detect visible light because it is a readily available ambient radiation. Other types of radiation, such as infrared could also be used. Passive methods can be very cheap, because in most cases they do not need particular hardware but simple digital cameras.

  • Stereoscopic systems usually employ two video cameras, slightly apart, looking at the same scene. By analysing the slight differences between the images seen by each camera, it is possible to determine the distance at each point in the images. This method is based on the same principles driving human stereoscopic vision.[33]
  • Photometric systems usually use a single camera, but take multiple images under varying lighting conditions. These techniques attempt to invert the image formation model in order to recover the surface orientation at each pixel.
  • Silhouette techniques use outlines created from a sequence of photographs around a three-dimensional object against a well contrasted background. These silhouettes are extruded and intersected to form the visual hull approximation of the object. With these approaches some concavities of an object (like the interior of a bowl) cannot be detected.

Photogrammetric non-contact passive methods edit

 
Images taken from multiple perspectives such as a fixed camera array can be taken of a subject for a photogrammetric reconstruction pipeline to generate a 3D mesh or point cloud.

Photogrammetry provides reliable information about 3D shapes of physical objects based on analysis of photographic images. The resulting 3D data is typically provided as a 3D point cloud, 3D mesh or 3D points.[34] Modern photogrammetry software applications automatically analyze a large number of digital images for 3D reconstruction, however manual interaction may be required if the software cannot automatically determine the 3D positions of the camera in the images which is an essential step in the reconstruction pipeline. Various software packages are available including PhotoModeler, Geodetic Systems, Autodesk ReCap, RealityCapture and Agisoft Metashape (see comparison of photogrammetry software).

  • Close range photogrammetry typically uses a handheld camera such as a DSLR with a fixed focal length lens to capture images of objects for 3D reconstruction.[35] Subjects include smaller objects such as a building facade, vehicles, sculptures, rocks, and shoes.
  • Camera Arrays can be used to generate 3D point clouds or meshes of live objects such as people or pets by synchronizing multiple cameras to photograph a subject from multiple perspectives at the same time for 3D object reconstruction.[36]
  • Wide angle photogrammetry can be used to capture the interior of buildings or enclosed spaces using a wide angle lens camera such as a 360 camera.
  • Aerial photogrammetry uses aerial images acquired by satellite, commercial aircraft or UAV drone to collect images of buildings, structures and terrain for 3D reconstruction into a point cloud or mesh.

Acquisition from acquired sensor data edit

Semi-automatic building extraction from lidar data and high-resolution images is also a possibility. Again, this approach allows modelling without physically moving towards the location or object.[37] From airborne lidar data, digital surface model (DSM) can be generated and then the objects higher than the ground are automatically detected from the DSM. Based on general knowledge about buildings, geometric characteristics such as size, height and shape information are then used to separate the buildings from other objects. The extracted building outlines are then simplified using an orthogonal algorithm to obtain better cartographic quality. Watershed analysis can be conducted to extract the ridgelines of building roofs. The ridgelines as well as slope information are used to classify the buildings per type. The buildings are then reconstructed using three parametric building models (flat, gabled, hipped).[38]

Acquisition from on-site sensors edit

Lidar and other terrestrial laser scanning technology[39] offers the fastest, automated way to collect height or distance information. lidar or laser for height measurement of buildings is becoming very promising.[40] Commercial applications of both airborne lidar and ground laser scanning technology have proven to be fast and accurate methods for building height extraction. The building extraction task is needed to determine building locations, ground elevation, orientations, building size, rooftop heights, etc. Most buildings are described to sufficient details in terms of general polyhedra, i.e., their boundaries can be represented by a set of planar surfaces and straight lines. Further processing such as expressing building footprints as polygons is used for data storing in GIS databases.

Using laser scans and images taken from ground level and a bird's-eye perspective, Fruh and Zakhor present an approach to automatically create textured 3D city models. This approach involves registering and merging the detailed facade models with a complementary airborne model. The airborne modeling process generates a half-meter resolution model with a bird's-eye view of the entire area, containing terrain profile and building tops. Ground-based modeling process results in a detailed model of the building facades. Using the DSM obtained from airborne laser scans, they localize the acquisition vehicle and register the ground-based facades to the airborne model by means of Monte Carlo localization (MCL). Finally, the two models are merged with different resolutions to obtain a 3D model.

Using an airborne laser altimeter, Haala, Brenner and Anders combined height data with the existing ground plans of buildings. The ground plans of buildings had already been acquired either in analog form by maps and plans or digitally in a 2D GIS. The project was done in order to enable an automatic data capture by the integration of these different types of information. Afterwards virtual reality city models are generated in the project by texture processing, e.g. by mapping of terrestrial images. The project demonstrated the feasibility of rapid acquisition of 3D urban GIS. Ground plans proved are another very important source of information for 3D building reconstruction. Compared to results of automatic procedures, these ground plans proved more reliable since they contain aggregated information which has been made explicit by human interpretation. For this reason, ground plans, can considerably reduce costs in a reconstruction project. An example of existing ground plan data usable in building reconstruction is the Digital Cadastral map, which provides information on the distribution of property, including the borders of all agricultural areas and the ground plans of existing buildings. Additionally information as street names and the usage of buildings (e.g. garage, residential building, office block, industrial building, church) is provided in the form of text symbols. At the moment the Digital Cadastral map is built up as a database covering an area, mainly composed by digitizing preexisting maps or plans.

Cost edit

  • Terrestrial laser scan devices (pulse or phase devices) + processing software generally start at a price of €150,000. Some less precise devices (as the Trimble VX) cost around €75,000.
  • Terrestrial lidar systems cost around €300,000.
  • Systems using regular still cameras mounted on RC helicopters (Photogrammetry) are also possible, and cost around €25,000. Systems that use still cameras with balloons are even cheaper (around €2,500), but require additional manual processing. As the manual processing takes around one month of labor for every day of taking pictures, this is still an expensive solution in the long run.
  • Obtaining satellite images is also an expensive endeavor. High resolution stereo images (0.5 m resolution) cost around €11,000. Image satellites include Quikbird, Ikonos. High resolution monoscopic images cost around €5,500. Somewhat lower resolution images (e.g. from the CORONA satellite; with a 2 m resolution) cost around €1,000 per 2 images. Note that Google Earth images are too low in resolution to make an accurate 3D model.[41]

Reconstruction edit

From point clouds edit

The point clouds produced by 3D scanners and 3D imaging can be used directly for measurement and visualisation in the architecture and construction world.

From models edit

Most applications, however, use instead polygonal 3D models, NURBS surface models, or editable feature-based CAD models (aka solid models).

  • Polygon mesh models: In a polygonal representation of a shape, a curved surface is modeled as many small faceted flat surfaces (think of a sphere modeled as a disco ball). Polygon models—also called Mesh models, are useful for visualisation, for some CAM (i.e., machining), but are generally "heavy" ( i.e., very large data sets), and are relatively un-editable in this form. Reconstruction to polygonal model involves finding and connecting adjacent points with straight lines in order to create a continuous surface. Many applications, both free and nonfree, are available for this purpose (e.g. GigaMesh, MeshLab, PointCab, kubit PointCloud for AutoCAD, Reconstructor, imagemodel, PolyWorks, Rapidform, Geomagic, Imageware, Rhino 3D etc.).
  • Surface models: The next level of sophistication in modeling involves using a quilt of curved surface patches to model the shape. These might be NURBS, TSplines or other curved representations of curved topology. Using NURBS, the spherical shape becomes a true mathematical sphere. Some applications offer patch layout by hand but the best in class offer both automated patch layout and manual layout. These patches have the advantage of being lighter and more manipulable when exported to CAD. Surface models are somewhat editable, but only in a sculptural sense of pushing and pulling to deform the surface. This representation lends itself well to modelling organic and artistic shapes. Providers of surface modellers include Rapidform, Geomagic, Rhino 3D, Maya, T Splines etc.
  • Solid CAD models: From an engineering/manufacturing perspective, the ultimate representation of a digitised shape is the editable, parametric CAD model. In CAD, the sphere is described by parametric features which are easily edited by changing a value (e.g., centre point and radius).

These CAD models describe not simply the envelope or shape of the object, but CAD models also embody the "design intent" (i.e., critical features and their relationship to other features). An example of design intent not evident in the shape alone might be a brake drum's lug bolts, which must be concentric with the hole in the centre of the drum. This knowledge would drive the sequence and method of creating the CAD model; a designer with an awareness of this relationship would not design the lug bolts referenced to the outside diameter, but instead, to the center. A modeler creating a CAD model will want to include both Shape and design intent in the complete CAD model.

Vendors offer different approaches to getting to the parametric CAD model. Some export the NURBS surfaces and leave it to the CAD designer to complete the model in CAD (e.g., Geomagic, Imageware, Rhino 3D). Others use the scan data to create an editable and verifiable feature based model that is imported into CAD with full feature tree intact, yielding a complete, native CAD model, capturing both shape and design intent (e.g. Geomagic, Rapidform). For instance, the market offers various plug-ins for established CAD-programs, such as SolidWorks. Xtract3D, DezignWorks and Geomagic for SolidWorks allow manipulating a 3D scan directly inside SolidWorks. Still other CAD applications are robust enough to manipulate limited points or polygon models within the CAD environment (e.g., CATIA, AutoCAD, Revit).

From a set of 2D slices edit

 
3D reconstruction of the brain and eyeballs from CT scanned DICOM images. In this image, areas with the density of bone or air were made transparent, and the slices stacked up in an approximate free-space alignment. The outer ring of material around the brain are the soft tissues of skin and muscle on the outside of the skull. A black box encloses the slices to provide the black background. Since these are simply 2D images stacked up, when viewed on edge the slices disappear since they have effectively zero thickness. Each DICOM scan represents about 5 mm of material averaged into a thin slice.

CT, industrial CT, MRI, or micro-CT scanners do not produce point clouds but a set of 2D slices (each termed a "tomogram") which are then 'stacked together' to produce a 3D representation. There are several ways to do this depending on the output required:

  • Volume rendering: Different parts of an object usually have different threshold values or greyscale densities. From this, a 3-dimensional model can be constructed and displayed on screen. Multiple models can be constructed from various thresholds, allowing different colours to represent each component of the object. Volume rendering is usually only used for visualisation of the scanned object.
  • Image segmentation: Where different structures have similar threshold/greyscale values, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation, a manual or automatic procedure that can remove the unwanted structures from the image. Image segmentation software usually allows export of the segmented structures in CAD or STL format for further manipulation.
  • Image-based meshing: When using 3D image data for computational analysis (e.g. CFD and FEA), simply segmenting the data and meshing from CAD can become time-consuming, and virtually intractable for the complex topologies typical of image data. The solution is called image-based meshing, an automated process of generating an accurate and realistic geometrical description of the scan data.

From laser scans edit

Laser scanning describes the general method to sample or scan a surface using laser technology. Several areas of application exist that mainly differ in the power of the lasers that are used, and in the results of the scanning process. Low laser power is used when the scanned surface doesn't have to be influenced, e.g. when it only has to be digitised. Confocal or 3D laser scanning are methods to get information about the scanned surface. Another low-power application uses structured light projection systems for solar cell flatness metrology,[42] enabling stress calculation throughout in excess of 2000 wafers per hour.[43]

The laser power used for laser scanning equipment in industrial applications is typically less than 1W. The power level is usually on the order of 200 mW or less but sometimes more.

From photographs edit

3D data acquisition and object reconstruction can be performed using stereo image pairs. Stereo photogrammetry or photogrammetry based on a block of overlapped images is the primary approach for 3D mapping and object reconstruction using 2D images. Close-range photogrammetry has also matured to the level where cameras or digital cameras can be used to capture the close-look images of objects, e.g., buildings, and reconstruct them using the very same theory as the aerial photogrammetry. An example of software which could do this is Vexcel FotoG 5.[44][45] This software has now been replaced by Vexcel GeoSynth.[46] Another similar software program is Microsoft Photosynth.[47][48]

A semi-automatic method for acquiring 3D topologically structured data from 2D aerial stereo images has been presented by Sisi Zlatanova.[49] The process involves the manual digitizing of a number of points necessary for automatically reconstructing the 3D objects. Each reconstructed object is validated by superimposition of its wire frame graphics in the stereo model. The topologically structured 3D data is stored in a database and are also used for visualization of the objects. Notable software used for 3D data acquisition using 2D images include e.g. Agisoft Metashape,[50] RealityCapture,[51] and ENSAIS Engineering College TIPHON (Traitement d'Image et PHOtogrammétrie Numérique).[52]

A method for semi-automatic building extraction together with a concept for storing building models alongside terrain and other topographic data in a topographical information system has been developed by Franz Rottensteiner. His approach was based on the integration of building parameter estimations into the photogrammetry process applying a hybrid modeling scheme. Buildings are decomposed into a set of simple primitives that are reconstructed individually and are then combined by Boolean operators. The internal data structure of both the primitives and the compound building models are based on the boundary representation methods[53][54]

Multiple images are used in Zhang's[55] approach to surface reconstruction from multiple images. A central idea is to explore the integration of both 3D stereo data and 2D calibrated images. This approach is motivated by the fact that only robust and accurate feature points that survived the geometry scrutiny of multiple images are reconstructed in space. The density insufficiency and the inevitable holes in the stereo data should then be filled in by using information from multiple images. The idea is thus to first construct small surface patches from stereo points, then to progressively propagate only reliable patches in their neighborhood from images into the whole surface using a best-first strategy. The problem thus reduces to searching for an optimal local surface patch going through a given set of stereo points from images.

Multi-spectral images are also used for 3D building detection. The first and last pulse data and the normalized difference vegetation index are used in the process.[56]

New measurement techniques are also employed to obtain measurements of and between objects from single images by using the projection, or the shadow as well as their combination. This technology is gaining attention given its fast processing time, and far lower cost than stereo measurements.[citation needed]

Applications edit

Space experiments edit

3D scanning technology has been used to scan space rocks for the European Space Agency.[57][58]

Construction industry and civil engineering edit

  • Robotic control: e.g. a laser scanner may function as the "eye" of a robot.[59][60]
  • As-built drawings of bridges, industrial plants, and monuments
  • Documentation of historical sites[61]
  • Site modelling and lay outing
  • Quality control
  • Quantity surveys
  • Payload monitoring [62]
  • Freeway redesign
  • Establishing a bench mark of pre-existing shape/state in order to detect structural changes resulting from exposure to extreme loadings such as earthquake, vessel/truck impact or fire.
  • Create GIS (geographic information system) maps[63] and geomatics.
  • Subsurface laser scanning in mines and karst voids.[64]
  • Forensic documentation[65]

Design process edit

  • Increasing accuracy working with complex parts and shapes,
  • Coordinating product design using parts from multiple sources,
  • Updating old CD scans with those from more current technology,
  • Replacing missing or older parts,
  • Creating cost savings by allowing as-built design services, for example in automotive manufacturing plants,
  • "Bringing the plant to the engineers" with web shared scans, and
  • Saving travel costs.

Entertainment edit

3D scanners are used by the entertainment industry to create digital 3D models for movies, video games and leisure purposes.[66] They are heavily utilized in virtual cinematography. In cases where a real-world equivalent of a model exists, it is much faster to scan the real-world object than to manually create a model using 3D modeling software. Frequently, artists sculpt physical models of what they want and scan them into digital form rather than directly creating digital models on a computer.

3D photography edit

 
3D selfie in 1:20 scale printed by Shapeways using gypsum-based printing, created by Madurodam miniature park from 2D pictures taken at its Fantasitron photo booth
 
Fantasitron 3D photo booth at Madurodam

3D scanners are evolving for the use of cameras to represent 3D objects in an accurate manner.[67] Companies are emerging since 2010 that create 3D portraits of people (3D figurines or 3D selfie).

An augmented reality menu for the Madrid restaurant chain 80 Degrees[68]

Law enforcement edit

3D laser scanning is used by the law enforcement agencies around the world. 3D models are used for on-site documentation of:[69]

  • Crime scenes
  • Bullet trajectories
  • Bloodstain pattern analysis
  • Accident reconstruction
  • Bombings
  • Plane crashes, and more

Reverse engineering edit

Reverse engineering of a mechanical component requires a precise digital model of the objects to be reproduced. Rather than a set of points a precise digital model can be represented by a polygon mesh, a set of flat or curved NURBS surfaces, or ideally for mechanical components, a CAD solid model. A 3D scanner can be used to digitise free-form or gradually changing shaped components as well as prismatic geometries whereas a coordinate measuring machine is usually used only to determine simple dimensions of a highly prismatic model. These data points are then processed to create a usable digital model, usually using specialized reverse engineering software.

Real estate edit

Land or buildings can be scanned into a 3D model, which allows buyers to tour and inspect the property remotely, anywhere, without having to be present at the property.[70] There is already at least one company providing 3D-scanned virtual real estate tours.[71] A typical virtual tour 2017-04-27 at the Wayback Machine would consist of dollhouse view,[72] inside view, as well as a floor plan.

Virtual/remote tourism edit

The environment at a place of interest can be captured and converted into a 3D model. This model can then be explored by the public, either through a VR interface or a traditional "2D" interface. This allows the user to explore locations which are inconvenient for travel.[73] A group of history students at Vancouver iTech Preparatory Middle School created a Virtual Museum by 3D Scanning more than 100 artifacts.[74]

Cultural heritage edit

There have been many research projects undertaken via the scanning of historical sites and artifacts both for documentation and analysis purposes.[75] The resulting models can be used for a variety of different analytical approaches.[76][77]

The combined use of 3D scanning and 3D printing technologies allows the replication of real objects without the use of traditional plaster casting techniques, that in many cases can be too invasive for being performed on precious or delicate cultural heritage artifacts.[78] In an example of a typical application scenario, a gargoyle model was digitally acquired using a 3D scanner and the produced 3D data was processed using MeshLab. The resulting digital 3D model was fed to a rapid prototyping machine to create a real resin replica of the original object.

Creation of 3D models for Museums and Archaeological artifacts[79][80][81]

Michelangelo edit

In 1999, two different research groups started scanning Michelangelo's statues. Stanford University with a group led by Marc Levoy[82] used a custom laser triangulation scanner built by Cyberware to scan Michelangelo's statues in Florence, notably the David, the Prigioni and the four statues in The Medici Chapel. The scans produced a data point density of one sample per 0.25 mm, detailed enough to see Michelangelo's chisel marks. These detailed scans produced a large amount of data (up to 32 gigabytes) and processing the data from his scans took 5 months. Approximately in the same period a research group from IBM, led by H. Rushmeier and F. Bernardini scanned the Pietà of Florence acquiring both geometric and colour details. The digital model, result of the Stanford scanning campaign, was thoroughly used in the 2004 subsequent restoration of the statue.[83]

Monticello edit

In 2002, David Luebke, et al. scanned Thomas Jefferson's Monticello.[84] A commercial time of flight laser scanner, the DeltaSphere 3000, was used. The scanner data was later combined with colour data from digital photographs to create the Virtual Monticello, and the Jefferson's Cabinet exhibits in the New Orleans Museum of Art in 2003. The Virtual Monticello exhibit simulated a window looking into Jefferson's Library. The exhibit consisted of a rear projection display on a wall and a pair of stereo glasses for the viewer. The glasses, combined with polarised projectors, provided a 3D effect. Position tracking hardware on the glasses allowed the display to adapt as the viewer moves around, creating the illusion that the display is actually a hole in the wall looking into Jefferson's Library. The Jefferson's Cabinet exhibit was a barrier stereogram (essentially a non-active hologram that appears different from different angles) of Jefferson's Cabinet.

Cuneiform tablets edit

The first 3D models of cuneiform tablets were acquired in Germany in 2000.[85] In 2003 the so-called Digital Hammurabi project acquired cuneiform tablets with a laser triangulation scanner using a regular grid pattern having a resolution of 0.025 mm (0.00098 in).[86] With the use of high-resolution 3D-scanners by the Heidelberg University for tablet acquisition in 2009 the development of the GigaMesh Software Framework began to visualize and extract cuneiform characters from 3D-models.[87] It was used to process ca. 2.000 3D-digitized tablets of the Hilprecht Collection in Jena to create an Open Access benchmark dataset[88] and an annotated collection[89] of 3D-models of tablets freely available under CC BY licenses.[90]

Kasubi Tombs edit

A 2009 CyArk 3D scanning project at Uganda's historic Kasubi Tombs, a UNESCO World Heritage Site, using a Leica HDS 4500, produced detailed architectural models of Muzibu Azaala Mpanga, the main building at the complex and tomb of the Kabakas (Kings) of Uganda. A fire on March 16, 2010, burned down much of the Muzibu Azaala Mpanga structure, and reconstruction work is likely to lean heavily upon the dataset produced by the 3D scan mission.[91]

"Plastico di Roma antica" edit

In 2005, Gabriele Guidi, et al. scanned the "Plastico di Roma antica",[92] a model of Rome created in the last century. Neither the triangulation method, nor the time of flight method satisfied the requirements of this project because the item to be scanned was both large and contained small details. They found though, that a modulated light scanner was able to provide both the ability to scan an object the size of the model and the accuracy that was needed. The modulated light scanner was supplemented by a triangulation scanner which was used to scan some parts of the model.

Other projects edit

The 3D Encounters Project at the Petrie Museum of Egyptian Archaeology aims to use 3D laser scanning to create a high quality 3D image library of artefacts and enable digital travelling exhibitions of fragile Egyptian artefacts, English Heritage has investigated the use of 3D laser scanning for a wide range of applications to gain archaeological and condition data, and the National Conservation Centre in Liverpool has also produced 3D laser scans on commission, including portable object and in situ scans of archaeological sites.[93] The Smithsonian Institution has a project called Smithsonian X 3D notable for the breadth of types of 3D objects they are attempting to scan. These include small objects such as insects and flowers, to human sized objects such as Amelia Earhart's Flight Suit to room sized objects such as the Gunboat Philadelphia to historic sites such as Liang Bua in Indonesia. Also of note the data from these scans is being made available to the public for free and downloadable in several data formats.

Medical CAD/CAM edit

3D scanners are used to capture the 3D shape of a patient in orthotics and dentistry. It gradually supplants tedious plaster cast. CAD/CAM software are then used to design and manufacture the orthosis, prosthesis[94] or dental implants.

Many Chairside dental CAD/CAM systems and Dental Laboratory CAD/CAM systems use 3D Scanner technologies to capture the 3D surface of a dental preparation (either in vivo or in vitro), in order to produce a restoration digitally using CAD software and ultimately produce the final restoration using a CAM technology (such as a CNC milling machine, or 3D printer). The chairside systems are designed to facilitate the 3D scanning of a preparation in vivo and produce the restoration (such as a Crown, Onlay, Inlay or Veneer).

Creation of 3D models for Anatomy and Biology education[95][96] and cadaver models for educational neurosurgical simulations.[97]

Quality assurance and industrial metrology edit

The digitalisation of real-world objects is of vital importance in various application domains. This method is especially applied in industrial quality assurance to measure the geometric dimension accuracy. Industrial processes such as assembly are complex, highly automated and typically based on CAD (computer-aided design) data. The problem is that the same degree of automation is also required for quality assurance. It is, for example, a very complex task to assemble a modern car, since it consists of many parts that must fit together at the very end of the production line. The optimal performance of this process is guaranteed by quality assurance systems. Especially the geometry of the metal parts must be checked in order to assure that they have the correct dimensions, fit together and finally work reliably.

Within highly automated processes, the resulting geometric measures are transferred to machines that manufacture the desired objects. Due to mechanical uncertainties and abrasions, the result may differ from its digital nominal. In order to automatically capture and evaluate these deviations, the manufactured part must be digitised as well. For this purpose, 3D scanners are applied to generate point samples from the object's surface which are finally compared against the nominal data.[98]

The process of comparing 3D data against a CAD model is referred to as CAD-Compare, and can be a useful technique for applications such as determining wear patterns on moulds and tooling, determining accuracy of final build, analysing gap and flush, or analysing highly complex sculpted surfaces. At present, laser triangulation scanners, structured light and contact scanning are the predominant technologies employed for industrial purposes, with contact scanning remaining the slowest, but overall most accurate option. Nevertheless, 3D scanning technology offers distinct advantages compared to traditional touch probe measurements. White-light or laser scanners accurately digitize objects all around, capturing fine details and freeform surfaces without reference points or spray. The entire surface is covered at record speed without the risk of damaging the part. Graphic comparison charts illustrate geometric deviations of full object level, providing deeper insights into potential causes.[99][100]

Circumvention of shipping costs and international import/export tariffs edit

3D scanning can be used in conjunction with 3D printing technology to virtually teleport certain object across distances without the need of shipping them and in some cases incurring import/export tariffs. For example, a plastic object can be 3D-scanned in the United States, the files can be sent off to a 3D-printing facility over in Germany where the object is replicated, effectively teleporting the object across the globe. In the future, as 3D scanning and 3D printing technologies become more and more prevalent, governments around the world will need to reconsider and rewrite trade agreements and international laws.

Object reconstruction edit

After the data has been collected, the acquired (and sometimes already processed) data from images or sensors needs to be reconstructed. This may be done in the same program or in some cases, the 3D data needs to be exported and imported into another program for further refining, and/or to add additional data. Such additional data could be GPS-location data. After the reconstruction, the data might be directly implemented into a local (GIS) map[101][102] or a worldwide map such as Google Earth or Apple Maps.

Software edit

Several software packages are used in which the acquired (and sometimes already processed) data from images or sensors is imported. Notable software packages include:[103]

See also edit

References edit

  1. ^ Izadi, Shahram; Davison, Andrew; Fitzgibbon, Andrew; Kim, David; Hilliges, Otmar; Molyneaux, David; Newcombe, Richard; Kohli, Pushmeet; Shotton, Jamie; Hodges, Steve; Freeman, Dustin (2011). "Kinect Fusion". Proceedings of the 24th annual ACM symposium on User interface software and technology - UIST '11. p. 559. doi:10.1145/2047196.2047270. ISBN 9781450307161. S2CID 3345516.
  2. ^ Moeslund, Thomas B.; Granum, Erik (1 March 2001). "A Survey of Computer Vision-Based Human Motion Capture". Computer Vision and Image Understanding. 81 (3): 231–268. CiteSeerX 10.1.1.108.203. doi:10.1006/cviu.2000.0897.
  3. ^ Wand, Michael; Adams, Bart; Ovsjanikov, Maksim; Berner, Alexander; Bokeloh, Martin; Jenke, Philipp; Guibas, Leonidas; Seidel, Hans-Peter; Schilling, Andreas (April 2009). "Efficient reconstruction of nonrigid shape and motion from real-time 3D scanner data". ACM Transactions on Graphics. 28 (2): 1–15. CiteSeerX 10.1.1.230.1675. doi:10.1145/1516522.1516526. S2CID 9881027.
  4. ^ Biswas, K. K.; Basu, Saurav Kumar (2011). "Gesture recognition using Microsoft Kinect®". The 5th International Conference on Automation, Robotics and Applications. pp. 100–103. doi:10.1109/ICARA.2011.6144864. ISBN 978-1-4577-0330-0. S2CID 8464855.
  5. ^ Kim, Pileun; Chen, Jingdao; Cho, Yong K. (May 2018). "SLAM-driven robotic mapping and registration of 3D point clouds". Automation in Construction. 89: 38–48. doi:10.1016/j.autcon.2018.01.009.
  6. ^ Scott, Clare (2018-04-19). "3D Scanning and 3D Printing Allow for Production of Lifelike Facial Prosthetics". 3DPrint.com.
  7. ^ O'Neal, Bridget (2015-02-19). "CyArk 500 Challenge Gains Momentum in Preserving Cultural Heritage with Artec 3D Scanning Technology". 3DPrint.com.
  8. ^ Fausto Bernardini, Holly E. Rushmeier (2002). "The 3D Model Acquisition Pipeline". Computer Graphics Forum. 21 (2): 149–172. CiteSeerX 10.1.1.94.7486. doi:10.1111/1467-8659.00574. S2CID 15779281.
  9. ^ "Matter and Form - 3D Scanning Hardware & Software". matterandform.net. Retrieved 2020-04-01.
  10. ^ OR3D. "What is 3D Scanning? - Scanning Basics and Devices". OR3D. Retrieved 2020-04-01.{{cite web}}: CS1 maint: numeric names: authors list (link)
  11. ^ "3D scanning technologies - what is 3D scanning and how does it work?". Aniwaa. Retrieved 2020-04-01.
  12. ^ "what is 3d scanning". laserdesign.com.
  13. ^ Hammoudi, Karim (2011). Contributions to the 3D city modeling : 3D polyhedral building model reconstruction from aerial images and 3D facade modeling from terrestrial 3D point cloud and images (Thesis). CiteSeerX 10.1.1.472.8586.
  14. ^ Pinggera, P.; Breckon, T.P.; Bischof, H. (September 2012). "On Cross-Spectral Stereo Matching using Dense Gradient Features" (PDF). Proc. British Machine Vision Conference. pp. 526.1–526.12. doi:10.5244/C.26.103. ISBN 978-1-901725-46-9. Retrieved 8 April 2013.[permanent dead link]
  15. ^ . Archived from the original on 2016-03-03. Retrieved 2021-01-24.
  16. ^ . Archived from the original on 2009-09-03. Retrieved 2009-09-09.
  17. ^ Brian Curless (November 2000). "From range scans to 3D models". ACM SIGGRAPH Computer Graphics. 33 (4): 38–41. doi:10.1145/345370.345399. S2CID 442358.
  18. ^ Cui, Yan; Schuon, Sebastian; Chan, Derek; Thrun, Sebastian; Theobalt, Christian (2010). "3D shape scanning with a time-of-flight camera". 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. pp. 1173–1180. doi:10.1109/CVPR.2010.5540082. ISBN 978-1-4244-6984-0. S2CID 2084943.
  19. ^ Franca, J.G.D.M.; Gazziro, M.A.; Ide, A.N.; Saito, J.H. (2005). "A 3D scanning system based on laser triangulation and variable field of view". IEEE International Conference on Image Processing 2005. pp. I-425. doi:10.1109/ICIP.2005.1529778. ISBN 978-0-7803-9134-5. S2CID 17914887.
  20. ^ Roy Mayer (1999). Scientific Canadian: Invention and Innovation From Canada's National Research Council. Vancouver: Raincoast Books. ISBN 978-1-55192-266-9. OCLC 41347212.
  21. ^ François Blais; Michel Picard; Guy Godin (6–9 September 2004). "Accurate 3D acquisition of freely moving objects". 2nd International Symposium on 3D Data Processing, Visualisation, and Transmission, 3DPVT 2004, Thessaloniki, Greece. Los Alamitos, CA: IEEE Computer Society. pp. 422–9. ISBN 0-7695-2223-8.
  22. ^ Goel, Salil; Lohani, Bharat (January 2014). "A Motion Correction Technique for Laser Scanning of Moving Objects". IEEE Geoscience and Remote Sensing Letters. 11 (1): 225–228. Bibcode:2014IGRSL..11..225G. doi:10.1109/LGRS.2013.2253444. S2CID 20531808.
  23. ^ . Virtual Technology. Archived from the original on 8 December 2020. Retrieved 8 November 2020.
  24. ^ Sirat, Gabriel; Psaltis, Demetri (1 January 1985). "Conoscopic holography" (PDF). Optics Letters. 10 (1): 4–6. Bibcode:1985OptL...10....4S. doi:10.1364/OL.10.000004. PMID 19724327.
  25. ^ Strobl, K. H.; Mair, E.; Bodenmuller, T.; Kielhofer, S.; Sepp, W.; Suppa, M.; Burschka, D.; Hirzinger, G. (2009). "The self-referenced DLR 3D-modeler". 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 21–28. doi:10.1109/IROS.2009.5354708. ISBN 978-1-4244-3803-7. S2CID 3576337.
  26. ^ Strobl, Klaus H.; Mair, Elmar; Hirzinger, Gerd (2011). "Image-based pose estimation for 3-D modeling in rapid, hand-held motion" (PDF). 2011 IEEE International Conference on Robotics and Automation. pp. 2593–2600. doi:10.1109/ICRA.2011.5979944. ISBN 978-1-61284-386-5. S2CID 2921156.
  27. ^ Trost, D. (1999). U.S. Patent No. 5,957,915. Washington, DC: U.S. Patent and Trademark Office.
  28. ^ Morano, R.A.; Ozturk, C.; Conn, R.; Dubin, S.; Zietz, S.; Nissano, J. (March 1998). "Structured light using pseudorandom codes". IEEE Transactions on Pattern Analysis and Machine Intelligence. 20 (3): 322–327. doi:10.1109/34.667888.
  29. ^ Huang, Peisen S. (1 December 2006). "High-resolution, real-time three-dimensional shape measurement". Optical Engineering. 45 (12): 123601. Bibcode:2006OptEn..45l3601Z. doi:10.1117/1.2402128.
  30. ^ Liu, Kai; Wang, Yongchang; Lau, Daniel L.; Hao, Qi; Hassebrook, Laurence G. (1 March 2010). "Dual-frequency pattern scheme for high-speed 3-D shape measurement". Optics Express. 18 (5): 5229–5244. Bibcode:2010OExpr..18.5229L. doi:10.1364/OE.18.005229. PMID 20389536.
  31. ^ Zhang, Song; Van Der Weide, Daniel; Oliver, James (26 April 2010). "Superfast phase-shifting method for 3-D shape measurement". Optics Express. 18 (9): 9684–9689. Bibcode:2010OExpr..18.9684Z. doi:10.1364/OE.18.009684. PMID 20588818.
  32. ^ Wang, Yajun; Zhang, Song (14 March 2011). "Superfast multifrequency phase-shifting technique with optimal pulse width modulation". Optics Express. 19 (6): 5149–5155. Bibcode:2011OExpr..19.5149W. doi:10.1364/OE.19.005149. PMID 21445150.
  33. ^ . Archived from the original on 2008-09-20.
  34. ^ "Geodetic Systems, Inc". www.geodetic.com. Retrieved 2020-03-22.
  35. ^ "What Camera Should You Use for Photogrammetry?". 80.lv. 2019-07-15. Retrieved 2020-03-22.
  36. ^ . Gentle Giant Studios. Archived from the original on 2020-03-22. Retrieved 2020-03-22.
  37. ^ Semi-Automatic building extraction from LIDAR Data and High-Resolution Image
  38. ^ (PDF) (Report). p. 11. Archived from the original (PDF) on 14 September 2020. Retrieved 9 September 2019.
  39. ^ . Archived from the original on 2009-05-11. Retrieved 2009-09-09.
  40. ^ Haala, Norbert; Brenner, Claus; Anders, Karl-Heinrich (1998). "3D Urban GIS from Laser Altimeter and 2D Map Data" (PDF). Institute for Photogrammetry (IFP).
  41. ^ Ghent University, Department of Geography
  42. ^ "Glossary of 3d technology terms". 23 April 2018.
  43. ^ W. J. Walecki; F. Szondy; M. M. Hilali (2008). "Fast in-line surface topography metrology enabling stress calculation for solar cell manufacturing allowing throughput in excess of 2000 wafers per hour". Meas. Sci. Technol. 19 (2): 025302. doi:10.1088/0957-0233/19/2/025302. S2CID 121768537.
  44. ^
  45. ^ . Archived from the original on 2006-10-18. Retrieved 2009-09-09.
  46. ^ . Archived from the original on 2009-10-04. Retrieved 2009-10-31.
  47. ^ . Archived from the original on 2017-02-05. Retrieved 2021-01-24.
  48. ^ 3D data acquisition and object reconstruction using photos
  49. ^ (PDF) (Thesis). Archived from the original (PDF) on 2011-07-24. Retrieved 2009-09-09.
  50. ^ "Agisoft Metashape". www.agisoft.com. Retrieved 2017-03-13.
  51. ^ "RealityCapture". www.capturingreality.com/. Retrieved 2017-03-13.
  52. ^ (PDF). Archived from the original (PDF) on 2011-07-19. Retrieved 2009-09-09.
  53. ^ (PDF). Archived from the original (PDF) on 2007-12-20. Retrieved 2009-09-09.
  54. ^ Rottensteiner, Franz (2001). Semi-automatic extraction of buildings based on hybrid adjustment using 3D surface models and management of building data in a TIS. Inst. für Photogrammetrie u. Fernerkundung d. Techn. Univ. Wien. hdl:20.500.12708/373. ISBN 978-3-9500791-3-5.
  55. ^ Zhang, Zhengyou (September 1999). "Flexible camera calibration by viewing a plane from unknown orientations". Proceedings of the Seventh IEEE International Conference on Computer Vision. Vol. 1. pp. 666–673. doi:10.1109/ICCV.1999.791289. ISBN 0-7695-0164-8. S2CID 206769306.
  56. ^ (PDF). Archived from the original (PDF) on 2011-07-06. Retrieved 2009-09-09.
  57. ^ "Science of tele-robotic rock collection". European Space Agency. Retrieved 2020-01-03.
  58. ^ Scanning rocks, retrieved 2021-12-08
  59. ^ Larsson, Sören; Kjellander, J.A.P. (2006). "Motion control and data capturing for laser scanning with an industrial robot". Robotics and Autonomous Systems. 54 (6): 453–460. doi:10.1016/j.robot.2006.02.002.
  60. ^ Landmark detection by a rotary laser scanner for autonomous robot navigation in sewer pipes 2011-07-17 at the Wayback Machine, Matthias Dorn et al., Proceedings of the ICMIT 2003, the second International Conference on Mechatronics and Information Technology, pp. 600- 604, Jecheon, Korea, Dec. 2003
  61. ^ Remondino, Fabio (June 2011). "Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning". Remote Sensing. 3 (6): 1104–1138. Bibcode:2011RemS....3.1104R. doi:10.3390/rs3061104.
  62. ^ Bewley, Alex; Shekhar, Rajiv; Leonard, Sam; Upcroft, Ben; Lever, Paul (2011). "Real-time volume estimation of a dragline payload" (PDF). 2011 IEEE International Conference on Robotics and Automation. pp. 1571–1576. doi:10.1109/ICRA.2011.5979898. ISBN 978-1-61284-386-5. S2CID 8147627.
  63. ^ Men, Hao; Pochiraju, Kishore (2012). "Algorithms for 3D Map Segment Registration". In Khosrow-Pour, Mehdi (ed.). Geographic Information Systems: Concepts, Methodologies, Tools, and Applications: Concepts, Methodologies, Tools, and Applications. Vol. I. IGI Global. p. 502. ISBN 978-1-4666-2039-1.
  64. ^ Murphy, Liam. . Subsurface Laser Scanning Case Studies. Liam Murphy. Archived from the original on 2012-04-18. Retrieved 11 January 2012.
  65. ^ . Archived from the original on 2013-05-22. Retrieved 2012-01-11.
  66. ^ "The Future of 3D Modeling". GarageFarm. 2017-05-28. Retrieved 2017-05-28.
  67. ^ Curless, B., & Seitz, S. (2000). 3D Photography. Course Notes for SIGGRAPH 2000.
  68. ^ "Códigos QR y realidad aumentada: la evolución de las cartas en los restaurantes". La Vanguardia (in Spanish). 2021-02-07. Retrieved 2021-11-23.
  69. ^ "Crime Scene Documentation".
  70. ^ Lamine Mahdjoubi; Cletus Moobela; Richard Laing (December 2013). "Providing real-estate services through the integration of 3D laser scanning and building information modelling". Computers in Industry. 64 (9): 1272. doi:10.1016/j.compind.2013.09.003.
  71. ^ "Matterport Surpasses 70 Million Global Visits and Celebrates Explosive Growth of 3D and Virtual Reality Spaces". Market Watch. Retrieved 19 December 2016.
  72. ^ "The VR Glossary". 29 August 2016. Retrieved 26 April 2017.
  73. ^ Daniel A. Guttentag (October 2010). "Virtual reality: Applications and implications for tourism". Tourism Management. 31 (5): 637–651. doi:10.1016/j.tourman.2009.07.003.
  74. ^ Gillespie, Katie (May 11, 2018). "Virtual reality translates into real history for iTech Prep students". The Columbian. Retrieved 2021-12-09.
  75. ^ Cignoni, Paolo; Scopigno, Roberto (18 June 2008). "Sampled 3D models for CH applications: A viable and enabling new medium or just a technological exercise?". Journal on Computing and Cultural Heritage. 1 (1): 2:1–2:23. doi:10.1145/1367080.1367082. S2CID 16510261.
  76. ^ Wyatt-Spratt, Simon (2022-11-04). "After the Revolution: A Review of 3D Modelling as a Tool for Stone Artefact Analysis". Journal of Computer Applications in Archaeology. 5 (1): 215–237. doi:10.5334/jcaa.103. hdl:2123/30230. S2CID 253353315.
  77. ^ Magnani, Matthew; Douglass, Matthew; Schroder, Whittaker; Reeves, Jonathan; Braun, David R. (October 2020). "The Digital Revolution to Come: Photogrammetry in Archaeological Practice". American Antiquity. 85 (4): 737–760. doi:10.1017/aaq.2020.59. S2CID 225390638.
  78. ^ Scopigno, R.; Cignoni, P.; Pietroni, N.; Callieri, M.; Dellepiane, M. (January 2017). "Digital Fabrication Techniques for Cultural Heritage: A Survey: Fabrication Techniques for Cultural Heritage". Computer Graphics Forum. 36 (1): 6–21. doi:10.1111/cgf.12781. S2CID 26690232.
  79. ^ Lewis, M.; Oswald, C. (2019). "Can an Inexpensive Phone App Compare to Other Methods when It Comes to 3D Digitization of Ship Models". The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 4210: 107–111. Bibcode:2019ISPAr4210..107L. doi:10.5194/isprs-archives-XLII-2-W10-107-2019. S2CID 146021711. ProQuest 2585423206.
  80. ^ "Submit your artefact". www.imaginedmuseum.uk. Retrieved 2021-11-23.[permanent dead link]
  81. ^ "Scholarship in 3D: 3D scanning and printing at ASOR 2018". The Digital Orientalist. 2018-12-03. Retrieved 2021-11-23.
  82. ^ Marc Levoy; Kari Pulli; Brian Curless; Szymon Rusinkiewicz; David Koller; Lucas Pereira; Matt Ginzton; Sean Anderson; James Davis; Jeremy Ginsberg; Jonathan Shade; Duane Fulk (2000). "The Digital Michelangelo Project: 3D Scanning of Large Statues" (PDF). Proceedings of the 27th annual conference on Computer graphics and interactive techniques. pp. 131–144.
  83. ^ Roberto Scopigno; Susanna Bracci; Falletti, Franca; Mauro Matteini (2004). Exploring David. Diagnostic Tests and State of Conservation. Gruppo Editoriale Giunti. ISBN 978-88-09-03325-2.
  84. ^ David Luebke; Christopher Lutz; Rui Wang; Cliff Woolley (2002). "Scanning Monticello".
  85. ^ "Tontafeln 3D, Hetitologie Portal, Mainz, Germany" (in German). Retrieved 2019-06-23.
  86. ^ Kumar, S.; Snyder, D.; Duncan, D.; Cohen, J.; Cooper, J. (2003). "Digital preservation of ancient Cuneiform tablets using 3D-scanning". Fourth International Conference on 3-D Digital Imaging and Modeling, 2003. 3DIM 2003. Proceedings. pp. 326–333. doi:10.1109/IM.2003.1240266. ISBN 978-0-7695-1991-3. S2CID 676588.
  87. ^ Mara, Hubert; Krömker, Susanne; Jakob, Stefan; Breuckmann, Bernd (2010). "GigaMesh and Gilgamesh 3D Multiscale Integral Invariant Cuneiform Character Extraction". Vast: International Symposium on Virtual Reality. doi:10.2312/VAST/VAST10/131-138. ISBN 978-3-905674-29-3.
  88. ^ Mara, Hubert (2019-06-07), HeiCuBeDa Hilprecht – Heidelberg Cuneiform Benchmark Dataset for the Hilprecht Collection, heiDATA – institutional repository for research data of Heidelberg University, doi:10.11588/data/IE8CCN
  89. ^ Mara, Hubert (2019-06-07), HeiCu3Da Hilprecht – Heidelberg Cuneiform 3D Database - Hilprecht Collection, heidICON – Die Heidelberger Objekt- und Multimediadatenbank, doi:10.11588/heidicon.hilprecht
  90. ^ Mara, Hubert; Bogacz, Bartosz (2019). "Breaking the Code on Broken Tablets: The Learning Challenge for Annotated Cuneiform Script in Normalized 2D and 3D Datasets". 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 148–153. doi:10.1109/ICDAR.2019.00032. ISBN 978-1-7281-3014-9. S2CID 211026941.
  91. ^ Scott Cedarleaf (2010). . CyArk Blog. Archived from the original on 2010-03-30. Retrieved 2010-04-22.{{cite news}}: CS1 maint: numeric names: authors list (link)
  92. ^ Gabriele Guidi; Laura Micoli; Michele Russo; Bernard Frischer; Monica De Simone; Alessandro Spinetti; Luca Carosso (13–16 June 2005). "3D digitisation of a large model of imperial Rome". 5th international conference on 3-D digital imaging and modeling : 3DIM 2005, Ottawa, Ontario, Canada. Los Alamitos, CA: IEEE Computer Society. pp. 565–572. ISBN 0-7695-2327-7.
  93. ^ Payne, Emma Marie (2012). "Imaging Techniques in Conservation" (PDF). Journal of Conservation and Museum Studies. Ubiquity Press. 10 (2): 17–29. doi:10.5334/jcms.1021201.
  94. ^ "3D Body Scanner for Body Scanning in Medicine Field | Scantech". 2020-08-27. Retrieved 2023-11-15.
  95. ^ Iwanaga, Joe; Terada, Satoshi; Kim, Hee-Jin; Tabira, Yoko; Arakawa, Takamitsu; Watanabe, Koichi; Dumont, Aaron S.; Tubbs, R. Shane (September 2021). "Easy three-dimensional scanning technology for anatomy education using a free cellphone app". Clinical Anatomy. 34 (6): 910–918. doi:10.1002/ca.23753. PMID 33984162. S2CID 234497497.
  96. ^ 竹下, 俊治 (19 March 2021). "生物の形態観察における3Dスキャンアプリの活用" [Utilization of 3D scanning application for morphological observation of organisms]. 学校教育実践学研究 (in Japanese). 27. doi:10.15027/50609.
  97. ^ Gurses, Muhammet Enes; Gungor, Abuzer; Hanalioglu, Sahin; Yaltirik, Cumhur Kaan; Postuk, Hasan Cagri; Berker, Mustafa; Türe, Uğur (December 2021). "Qlone®: A Simple Method to Create 360-Degree Photogrammetry-Based 3-Dimensional Model of Cadaveric Specimens". Operative Neurosurgery. 21 (6): E488–E493. doi:10.1093/ons/opab355. PMID 34662905.
  98. ^ Christian Teutsch (2007). Model-based Analysis and Evaluation of Point Sets from Optical 3D Laser Scanners (PhD thesis).
  99. ^ "3D scanning technologies". Retrieved 2016-09-15.
  100. ^ Timeline of 3D Laser Scanners
  101. ^ (PDF). Archived from the original (PDF) on 2003-05-06. Retrieved 2009-09-09.
  102. ^ 3D data implementation to GIS maps
  103. ^ Zlatanova, Sisi (2008). "Working Group II — Acquisition — Position Paper: Data collection and 3D reconstruction". Advances in 3D Geoinformation Systems. Lecture Notes in Geoinformation and Cartography. pp. 425–428. doi:10.1007/978-3-540-72135-2_24. ISBN 978-3-540-72134-5.

scanning, process, analyzing, real, world, object, environment, collect, three, dimensional, data, shape, possibly, appearance, color, collected, data, then, used, construct, digital, models, making, model, viking, belt, buckle, using, hand, held, viuscan, las. 3D scanning is the process of analyzing a real world object or environment to collect three dimensional data of its shape and possibly its appearance e g color The collected data can then be used to construct digital 3D models Making a 3D model of a Viking belt buckle using a hand held VIUscan 3D laser scannerA 3D scanner can be based on many different technologies each with its own limitations advantages and costs Many limitations in the kind of objects that can be digitised are still present For example optical technology may encounter many difficulties with dark shiny reflective or transparent objects For example industrial computed tomography scanning structured light 3D scanners LiDAR and Time Of Flight 3D Scanners can be used to construct digital 3D models without destructive testing Collected 3D data is useful for a wide variety of applications These devices are used extensively by the entertainment industry in the production of movies and video games including virtual reality Other common applications of this technology include augmented reality 1 motion capture 2 3 gesture recognition 4 robotic mapping 5 industrial design orthotics and prosthetics 6 reverse engineering and prototyping quality control inspection and the digitization of cultural artifacts 7 Contents 1 Functionality 2 Technology 2 1 Contact 2 2 Non contact active 2 2 1 Time of flight 2 2 2 Triangulation 2 2 3 Strengths and weaknesses 2 2 4 Conoscopic holography 2 3 Hand held laser scanners 2 4 Structured light 2 5 Modulated light 2 6 Volumetric techniques 2 6 1 Medical 2 6 2 Industrial 2 7 Non contact passive 2 7 1 Photogrammetric non contact passive methods 2 8 Acquisition from acquired sensor data 2 9 Acquisition from on site sensors 2 10 Cost 3 Reconstruction 3 1 From point clouds 3 2 From models 3 3 From a set of 2D slices 3 4 From laser scans 3 5 From photographs 4 Applications 4 1 Space experiments 4 2 Construction industry and civil engineering 4 3 Design process 4 4 Entertainment 4 5 3D photography 4 6 Law enforcement 4 7 Reverse engineering 4 8 Real estate 4 9 Virtual remote tourism 4 10 Cultural heritage 4 10 1 Michelangelo 4 10 2 Monticello 4 10 3 Cuneiform tablets 4 10 4 Kasubi Tombs 4 10 5 Plastico di Roma antica 4 10 6 Other projects 4 11 Medical CAD CAM 4 12 Quality assurance and industrial metrology 4 13 Circumvention of shipping costs and international import export tariffs 5 Object reconstruction 6 Software 7 See also 8 ReferencesFunctionality editThe purpose of a 3D scanner is usually to create a 3D model This 3D model consists of a polygon mesh or point cloud of geometric samples on the surface of the subject These points can then be used to extrapolate the shape of the subject a process called reconstruction If colour information is collected at each point then the colours or textures on the surface of the subject can also be determined 3D scanners share several traits with cameras Like most cameras they have a cone like field of view and like cameras they can only collect information about surfaces that are not obscured While a camera collects colour information about surfaces within its field of view a 3D scanner collects distance information about surfaces within its field of view The picture produced by a 3D scanner describes the distance to a surface at each point in the picture This allows the three dimensional position of each point in the picture to be identified In some situations a single scan will not produce a complete model of the subject Multiple scans from different directions are usually helpful to obtain information about all sides of the subject These scans have to be brought into a common reference system a process that is usually called alignment or registration and then merged to create a complete 3D model This whole process going from the single range map to the whole model is usually known as the 3D scanning pipeline 8 9 10 11 12 Technology editThere are a variety of technologies for digitally acquiring the shape of a 3D object The techniques work with most or all sensor types including optical acoustic laser scanning 13 radar thermal 14 and seismic 15 16 A well established classification 17 divides them into two types contact and non contact Non contact solutions can be further divided into two main categories active and passive There are a variety of technologies that fall under each of these categories Contact edit nbsp A coordinate measuring machine CMM with scanning head nbsp 3D scanning of a fin whale skeleton in the Natural History Museum of Slovenia August 2013 Contact 3D scanners work by physically probing touching the part and recording the position of the sensor as the probe moves around the part There are two main types of contact 3D scanners Coordinate measuring machines CMMs which traditionally have 3 perpendicular moving axis with a touch probe mounted on the Z axis As the touch probe moves around the part sensors on each axis record the position to generate XYZ coordinates Modern CMMs are 5 axis systems with the two extra axes provided by pivoting sensor heads CMMs are the most accurate form of 3D measurement achieving micron precision The greatest advantage of a CMM after accuracy is that it can be run in autonomous CNC mode or as a manual probing system The disadvantage of CMMs is that their upfront cost and the technical knowledge required to operate them Articulated Arms which generally have multiple segments with polar sensors on each joint As per the CMM as the articulated arm moves around the part sensors record their position and the location of the end of the arm is calculated using complex math and the wrist rotation angle and hinge angle of each joint While not usually as accurate as CMMs articulated arms still achieve high accuracy and are cheaper and slightly easier to use They do not usually have CNC options Both modern CMMs and Articulated Arms can also be fitted with non contact laser scanners instead of touch probes Non contact active edit Active scanners emit some kind of radiation or light and detect its reflection or radiation passing through object in order to probe an object or environment Possible types of emissions used include light ultrasound or x ray Time of flight edit nbsp This lidar scanner may be used to scan buildings rock formations etc to produce a 3D model The lidar can aim its laser beam in a wide range its head rotates horizontally a mirror flips vertically The laser beam is used to measure the distance to the first object on its path The time of flight 3D laser scanner is an active scanner that uses laser light to probe the subject At the heart of this type of scanner is a time of flight laser range finder The laser range finder finds the distance of a surface by timing the round trip time of a pulse of light A laser is used to emit a pulse of light and the amount of time before the reflected light is seen by a detector is measured Since the speed of light c displaystyle c nbsp is known the round trip time determines the travel distance of the light which is twice the distance between the scanner and the surface If t displaystyle t nbsp is the round trip time then distance is equal to c t 2 displaystyle textstyle c cdot t 2 nbsp The accuracy of a time of flight 3D laser scanner depends on how precisely we can measure the t displaystyle t nbsp time 3 3 picoseconds approx is the time taken for light to travel 1 millimetre The laser range finder only detects the distance of one point in its direction of view Thus the scanner scans its entire field of view one point at a time by changing the range finder s direction of view to scan different points The view direction of the laser range finder can be changed either by rotating the range finder itself or by using a system of rotating mirrors The latter method is commonly used because mirrors are much lighter and can thus be rotated much faster and with greater accuracy Typical time of flight 3D laser scanners can measure the distance of 10 000 100 000 points every second Time of flight devices are also available in a 2D configuration This is referred to as a time of flight camera 18 Triangulation edit nbsp Principle of a laser triangulation sensor Two object positions are shown Triangulation based 3D laser scanners are also active scanners that use laser light to probe the environment With respect to time of flight 3D laser scanner the triangulation laser shines a laser on the subject and exploits a camera to look for the location of the laser dot Depending on how far away the laser strikes a surface the laser dot appears at different places in the camera s field of view This technique is called triangulation because the laser dot the camera and the laser emitter form a triangle The length of one side of the triangle the distance between the camera and the laser emitter is known The angle of the laser emitter corner is also known The angle of the camera corner can be determined by looking at the location of the laser dot in the camera s field of view These three pieces of information fully determine the shape and size of the triangle and give the location of the laser dot corner of the triangle 19 In most cases a laser stripe instead of a single laser dot is swept across the object to speed up the acquisition process The National Research Council of Canada was among the first institutes to develop the triangulation based laser scanning technology in 1978 20 Strengths and weaknesses edit Time of flight and triangulation range finders each have strengths and weaknesses that make them suitable for different situations The advantage of time of flight range finders is that they are capable of operating over very long distances on the order of kilometres These scanners are thus suitable for scanning large structures like buildings or geographic features The disadvantage of time of flight range finders is their accuracy Due to the high speed of light timing the round trip time is difficult and the accuracy of the distance measurement is relatively low on the order of millimetres Triangulation range finders are exactly the opposite They have a limited range of some meters but their accuracy is relatively high The accuracy of triangulation range finders is on the order of tens of micrometers Time of flight scanners accuracy can be lost when the laser hits the edge of an object because the information that is sent back to the scanner is from two different locations for one laser pulse The coordinate relative to the scanner s position for a point that has hit the edge of an object will be calculated based on an average and therefore will put the point in the wrong place When using a high resolution scan on an object the chances of the beam hitting an edge are increased and the resulting data will show noise just behind the edges of the object Scanners with a smaller beam width will help to solve this problem but will be limited by range as the beam width will increase over distance Software can also help by determining that the first object to be hit by the laser beam should cancel out the second At a rate of 10 000 sample points per second low resolution scans can take less than a second but high resolution scans requiring millions of samples can take minutes for some time of flight scanners The problem this creates is distortion from motion Since each point is sampled at a different time any motion in the subject or the scanner will distort the collected data Thus it is usually necessary to mount both the subject and the scanner on stable platforms and minimise vibration Using these scanners to scan objects in motion is very difficult Recently there has been research on compensating for distortion from small amounts of vibration 21 and distortions due to motion and or rotation 22 Short range laser scanners can not usually encompass a depth of field more than 1 meter 23 When scanning in one position for any length of time slight movement can occur in the scanner position due to changes in temperature If the scanner is set on a tripod and there is strong sunlight on one side of the scanner then that side of the tripod will expand and slowly distort the scan data from one side to another Some laser scanners have level compensators built into them to counteract any movement of the scanner during the scan process Conoscopic holography edit In a conoscopic system a laser beam is projected onto the surface and then the immediate reflection along the same ray path are put through a conoscopic crystal and projected onto a CCD The result is a diffraction pattern that can be frequency analyzed to determine the distance to the measured surface The main advantage with conoscopic holography is that only a single ray path is needed for measuring thus giving an opportunity to measure for instance the depth of a finely drilled hole 24 Hand held laser scanners edit Hand held laser scanners create a 3D image through the triangulation mechanism described above a laser dot or line is projected onto an object from a hand held device and a sensor typically a charge coupled device or position sensitive device measures the distance to the surface Data is collected in relation to an internal coordinate system and therefore to collect data where the scanner is in motion the position of the scanner must be determined The position can be determined by the scanner using reference features on the surface being scanned typically adhesive reflective tabs but natural features have been also used in research work 25 26 or by using an external tracking method External tracking often takes the form of a laser tracker to provide the sensor position with integrated camera to determine the orientation of the scanner or a photogrammetric solution using 3 or more cameras providing the complete six degrees of freedom of the scanner Both techniques tend to use infrared light emitting diodes attached to the scanner which are seen by the camera s through filters providing resilience to ambient lighting 27 Data is collected by a computer and recorded as data points within three dimensional space with processing this can be converted into a triangulated mesh and then a computer aided design model often as non uniform rational B spline surfaces Hand held laser scanners can combine this data with passive visible light sensors which capture surface textures and colors to build or reverse engineer a full 3D model Structured light edit Main article Structured light 3D scanner Structured light 3D scanners project a pattern of light on the subject and look at the deformation of the pattern on the subject The pattern is projected onto the subject using either an LCD projector or other stable light source A camera offset slightly from the pattern projector looks at the shape of the pattern and calculates the distance of every point in the field of view Structured light scanning is still a very active area of research with many research papers published each year Perfect maps have also been proven useful as structured light patterns that solve the correspondence problem and allow for error detection and error correction 28 The advantage of structured light 3D scanners is speed and precision Instead of scanning one point at a time structured light scanners scan multiple points or the entire field of view at once Scanning an entire field of view in a fraction of a second reduces or eliminates the problem of distortion from motion Some existing systems are capable of scanning moving objects in real time A real time scanner using digital fringe projection and phase shifting technique certain kinds of structured light methods was developed to capture reconstruct and render high density details of dynamically deformable objects such as facial expressions at 40 frames per second 29 Recently another scanner has been developed Different patterns can be applied to this system and the frame rate for capturing and data processing achieves 120 frames per second It can also scan isolated surfaces for example two moving hands 30 By utilising the binary defocusing technique speed breakthroughs have been made that could reach hundreds 31 to thousands of frames per second 32 Modulated light edit Modulated light 3D scanners shine a continually changing light at the subject Usually the light source simply cycles its amplitude in a sinusoidal pattern A camera detects the reflected light and the amount the pattern is shifted by determines the distance the light travelled Modulated light also allows the scanner to ignore light from sources other than a laser so there is no interference Volumetric techniques edit Medical edit Computed tomography CT is a medical imaging method which generates a three dimensional image of the inside of an object from a large series of two dimensional X ray images similarly magnetic resonance imaging is another medical imaging technique that provides much greater contrast between the different soft tissues of the body than computed tomography CT does making it especially useful in neurological brain musculoskeletal cardiovascular and oncological cancer imaging These techniques produce a discrete 3D volumetric representation that can be directly visualised manipulated or converted to traditional 3D surface by mean of isosurface extraction algorithms Industrial edit Although most common in medicine industrial computed tomography microtomography and MRI are also used in other fields for acquiring a digital representation of an object and its interior such as non destructive materials testing reverse engineering or studying biological and paleontological specimens Non contact passive edit Passive 3D imaging solutions do not emit any kind of radiation themselves but instead rely on detecting reflected ambient radiation Most solutions of this type detect visible light because it is a readily available ambient radiation Other types of radiation such as infrared could also be used Passive methods can be very cheap because in most cases they do not need particular hardware but simple digital cameras Stereoscopic systems usually employ two video cameras slightly apart looking at the same scene By analysing the slight differences between the images seen by each camera it is possible to determine the distance at each point in the images This method is based on the same principles driving human stereoscopic vision 33 Photometric systems usually use a single camera but take multiple images under varying lighting conditions These techniques attempt to invert the image formation model in order to recover the surface orientation at each pixel Silhouette techniques use outlines created from a sequence of photographs around a three dimensional object against a well contrasted background These silhouettes are extruded and intersected to form the visual hull approximation of the object With these approaches some concavities of an object like the interior of a bowl cannot be detected Photogrammetric non contact passive methods edit This section needs expansion You can help by adding to it March 2020 Main article Photogrammetry nbsp Images taken from multiple perspectives such as a fixed camera array can be taken of a subject for a photogrammetric reconstruction pipeline to generate a 3D mesh or point cloud Photogrammetry provides reliable information about 3D shapes of physical objects based on analysis of photographic images The resulting 3D data is typically provided as a 3D point cloud 3D mesh or 3D points 34 Modern photogrammetry software applications automatically analyze a large number of digital images for 3D reconstruction however manual interaction may be required if the software cannot automatically determine the 3D positions of the camera in the images which is an essential step in the reconstruction pipeline Various software packages are available including PhotoModeler Geodetic Systems Autodesk ReCap RealityCapture and Agisoft Metashape see comparison of photogrammetry software Close range photogrammetry typically uses a handheld camera such as a DSLR with a fixed focal length lens to capture images of objects for 3D reconstruction 35 Subjects include smaller objects such as a building facade vehicles sculptures rocks and shoes Camera Arrays can be used to generate 3D point clouds or meshes of live objects such as people or pets by synchronizing multiple cameras to photograph a subject from multiple perspectives at the same time for 3D object reconstruction 36 Wide angle photogrammetry can be used to capture the interior of buildings or enclosed spaces using a wide angle lens camera such as a 360 camera Aerial photogrammetry uses aerial images acquired by satellite commercial aircraft or UAV drone to collect images of buildings structures and terrain for 3D reconstruction into a point cloud or mesh Acquisition from acquired sensor data edit Semi automatic building extraction from lidar data and high resolution images is also a possibility Again this approach allows modelling without physically moving towards the location or object 37 From airborne lidar data digital surface model DSM can be generated and then the objects higher than the ground are automatically detected from the DSM Based on general knowledge about buildings geometric characteristics such as size height and shape information are then used to separate the buildings from other objects The extracted building outlines are then simplified using an orthogonal algorithm to obtain better cartographic quality Watershed analysis can be conducted to extract the ridgelines of building roofs The ridgelines as well as slope information are used to classify the buildings per type The buildings are then reconstructed using three parametric building models flat gabled hipped 38 Acquisition from on site sensors edit Lidar and other terrestrial laser scanning technology 39 offers the fastest automated way to collect height or distance information lidar or laser for height measurement of buildings is becoming very promising 40 Commercial applications of both airborne lidar and ground laser scanning technology have proven to be fast and accurate methods for building height extraction The building extraction task is needed to determine building locations ground elevation orientations building size rooftop heights etc Most buildings are described to sufficient details in terms of general polyhedra i e their boundaries can be represented by a set of planar surfaces and straight lines Further processing such as expressing building footprints as polygons is used for data storing in GIS databases Using laser scans and images taken from ground level and a bird s eye perspective Fruh and Zakhor present an approach to automatically create textured 3D city models This approach involves registering and merging the detailed facade models with a complementary airborne model The airborne modeling process generates a half meter resolution model with a bird s eye view of the entire area containing terrain profile and building tops Ground based modeling process results in a detailed model of the building facades Using the DSM obtained from airborne laser scans they localize the acquisition vehicle and register the ground based facades to the airborne model by means of Monte Carlo localization MCL Finally the two models are merged with different resolutions to obtain a 3D model Using an airborne laser altimeter Haala Brenner and Anders combined height data with the existing ground plans of buildings The ground plans of buildings had already been acquired either in analog form by maps and plans or digitally in a 2D GIS The project was done in order to enable an automatic data capture by the integration of these different types of information Afterwards virtual reality city models are generated in the project by texture processing e g by mapping of terrestrial images The project demonstrated the feasibility of rapid acquisition of 3D urban GIS Ground plans proved are another very important source of information for 3D building reconstruction Compared to results of automatic procedures these ground plans proved more reliable since they contain aggregated information which has been made explicit by human interpretation For this reason ground plans can considerably reduce costs in a reconstruction project An example of existing ground plan data usable in building reconstruction is the Digital Cadastral map which provides information on the distribution of property including the borders of all agricultural areas and the ground plans of existing buildings Additionally information as street names and the usage of buildings e g garage residential building office block industrial building church is provided in the form of text symbols At the moment the Digital Cadastral map is built up as a database covering an area mainly composed by digitizing preexisting maps or plans Cost edit Terrestrial laser scan devices pulse or phase devices processing software generally start at a price of 150 000 Some less precise devices as the Trimble VX cost around 75 000 Terrestrial lidar systems cost around 300 000 Systems using regular still cameras mounted on RC helicopters Photogrammetry are also possible and cost around 25 000 Systems that use still cameras with balloons are even cheaper around 2 500 but require additional manual processing As the manual processing takes around one month of labor for every day of taking pictures this is still an expensive solution in the long run Obtaining satellite images is also an expensive endeavor High resolution stereo images 0 5 m resolution cost around 11 000 Image satellites include Quikbird Ikonos High resolution monoscopic images cost around 5 500 Somewhat lower resolution images e g from the CORONA satellite with a 2 m resolution cost around 1 000 per 2 images Note that Google Earth images are too low in resolution to make an accurate 3D model 41 Reconstruction editMain article 3D reconstruction From point clouds edit The point clouds produced by 3D scanners and 3D imaging can be used directly for measurement and visualisation in the architecture and construction world From models edit Most applications however use instead polygonal 3D models NURBS surface models or editable feature based CAD models aka solid models Polygon mesh models In a polygonal representation of a shape a curved surface is modeled as many small faceted flat surfaces think of a sphere modeled as a disco ball Polygon models also called Mesh models are useful for visualisation for some CAM i e machining but are generally heavy i e very large data sets and are relatively un editable in this form Reconstruction to polygonal model involves finding and connecting adjacent points with straight lines in order to create a continuous surface Many applications both free and nonfree are available for this purpose e g GigaMesh MeshLab PointCab kubit PointCloud for AutoCAD Reconstructor imagemodel PolyWorks Rapidform Geomagic Imageware Rhino 3D etc Surface models The next level of sophistication in modeling involves using a quilt of curved surface patches to model the shape These might be NURBS TSplines or other curved representations of curved topology Using NURBS the spherical shape becomes a true mathematical sphere Some applications offer patch layout by hand but the best in class offer both automated patch layout and manual layout These patches have the advantage of being lighter and more manipulable when exported to CAD Surface models are somewhat editable but only in a sculptural sense of pushing and pulling to deform the surface This representation lends itself well to modelling organic and artistic shapes Providers of surface modellers include Rapidform Geomagic Rhino 3D Maya T Splines etc Solid CAD models From an engineering manufacturing perspective the ultimate representation of a digitised shape is the editable parametric CAD model In CAD the sphere is described by parametric features which are easily edited by changing a value e g centre point and radius These CAD models describe not simply the envelope or shape of the object but CAD models also embody the design intent i e critical features and their relationship to other features An example of design intent not evident in the shape alone might be a brake drum s lug bolts which must be concentric with the hole in the centre of the drum This knowledge would drive the sequence and method of creating the CAD model a designer with an awareness of this relationship would not design the lug bolts referenced to the outside diameter but instead to the center A modeler creating a CAD model will want to include both Shape and design intent in the complete CAD model Vendors offer different approaches to getting to the parametric CAD model Some export the NURBS surfaces and leave it to the CAD designer to complete the model in CAD e g Geomagic Imageware Rhino 3D Others use the scan data to create an editable and verifiable feature based model that is imported into CAD with full feature tree intact yielding a complete native CAD model capturing both shape and design intent e g Geomagic Rapidform For instance the market offers various plug ins for established CAD programs such as SolidWorks Xtract3D DezignWorks and Geomagic for SolidWorks allow manipulating a 3D scan directly inside SolidWorks Still other CAD applications are robust enough to manipulate limited points or polygon models within the CAD environment e g CATIA AutoCAD Revit From a set of 2D slices edit nbsp 3D reconstruction of the brain and eyeballs from CT scanned DICOM images In this image areas with the density of bone or air were made transparent and the slices stacked up in an approximate free space alignment The outer ring of material around the brain are the soft tissues of skin and muscle on the outside of the skull A black box encloses the slices to provide the black background Since these are simply 2D images stacked up when viewed on edge the slices disappear since they have effectively zero thickness Each DICOM scan represents about 5 mm of material averaged into a thin slice CT industrial CT MRI or micro CT scanners do not produce point clouds but a set of 2D slices each termed a tomogram which are then stacked together to produce a 3D representation There are several ways to do this depending on the output required Volume rendering Different parts of an object usually have different threshold values or greyscale densities From this a 3 dimensional model can be constructed and displayed on screen Multiple models can be constructed from various thresholds allowing different colours to represent each component of the object Volume rendering is usually only used for visualisation of the scanned object Image segmentation Where different structures have similar threshold greyscale values it can become impossible to separate them simply by adjusting volume rendering parameters The solution is called segmentation a manual or automatic procedure that can remove the unwanted structures from the image Image segmentation software usually allows export of the segmented structures in CAD or STL format for further manipulation Image based meshing When using 3D image data for computational analysis e g CFD and FEA simply segmenting the data and meshing from CAD can become time consuming and virtually intractable for the complex topologies typical of image data The solution is called image based meshing an automated process of generating an accurate and realistic geometrical description of the scan data From laser scans edit Laser scanning describes the general method to sample or scan a surface using laser technology Several areas of application exist that mainly differ in the power of the lasers that are used and in the results of the scanning process Low laser power is used when the scanned surface doesn t have to be influenced e g when it only has to be digitised Confocal or 3D laser scanning are methods to get information about the scanned surface Another low power application uses structured light projection systems for solar cell flatness metrology 42 enabling stress calculation throughout in excess of 2000 wafers per hour 43 The laser power used for laser scanning equipment in industrial applications is typically less than 1W The power level is usually on the order of 200 mW or less but sometimes more From photographs edit See also Photogrammetry 3D data acquisition and object reconstruction can be performed using stereo image pairs Stereo photogrammetry or photogrammetry based on a block of overlapped images is the primary approach for 3D mapping and object reconstruction using 2D images Close range photogrammetry has also matured to the level where cameras or digital cameras can be used to capture the close look images of objects e g buildings and reconstruct them using the very same theory as the aerial photogrammetry An example of software which could do this is Vexcel FotoG 5 44 45 This software has now been replaced by Vexcel GeoSynth 46 Another similar software program is Microsoft Photosynth 47 48 A semi automatic method for acquiring 3D topologically structured data from 2D aerial stereo images has been presented by Sisi Zlatanova 49 The process involves the manual digitizing of a number of points necessary for automatically reconstructing the 3D objects Each reconstructed object is validated by superimposition of its wire frame graphics in the stereo model The topologically structured 3D data is stored in a database and are also used for visualization of the objects Notable software used for 3D data acquisition using 2D images include e g Agisoft Metashape 50 RealityCapture 51 and ENSAIS Engineering College TIPHON Traitement d Image et PHOtogrammetrie Numerique 52 A method for semi automatic building extraction together with a concept for storing building models alongside terrain and other topographic data in a topographical information system has been developed by Franz Rottensteiner His approach was based on the integration of building parameter estimations into the photogrammetry process applying a hybrid modeling scheme Buildings are decomposed into a set of simple primitives that are reconstructed individually and are then combined by Boolean operators The internal data structure of both the primitives and the compound building models are based on the boundary representation methods 53 54 Multiple images are used in Zhang s 55 approach to surface reconstruction from multiple images A central idea is to explore the integration of both 3D stereo data and 2D calibrated images This approach is motivated by the fact that only robust and accurate feature points that survived the geometry scrutiny of multiple images are reconstructed in space The density insufficiency and the inevitable holes in the stereo data should then be filled in by using information from multiple images The idea is thus to first construct small surface patches from stereo points then to progressively propagate only reliable patches in their neighborhood from images into the whole surface using a best first strategy The problem thus reduces to searching for an optimal local surface patch going through a given set of stereo points from images Multi spectral images are also used for 3D building detection The first and last pulse data and the normalized difference vegetation index are used in the process 56 New measurement techniques are also employed to obtain measurements of and between objects from single images by using the projection or the shadow as well as their combination This technology is gaining attention given its fast processing time and far lower cost than stereo measurements citation needed Applications editSpace experiments edit 3D scanning technology has been used to scan space rocks for the European Space Agency 57 58 Construction industry and civil engineering edit Robotic control e g a laser scanner may function as the eye of a robot 59 60 As built drawings of bridges industrial plants and monuments Documentation of historical sites 61 Site modelling and lay outing Quality control Quantity surveys Payload monitoring 62 Freeway redesign Establishing a bench mark of pre existing shape state in order to detect structural changes resulting from exposure to extreme loadings such as earthquake vessel truck impact or fire Create GIS geographic information system maps 63 and geomatics Subsurface laser scanning in mines and karst voids 64 Forensic documentation 65 Design process edit Increasing accuracy working with complex parts and shapes Coordinating product design using parts from multiple sources Updating old CD scans with those from more current technology Replacing missing or older parts Creating cost savings by allowing as built design services for example in automotive manufacturing plants Bringing the plant to the engineers with web shared scans and Saving travel costs Entertainment edit 3D scanners are used by the entertainment industry to create digital 3D models for movies video games and leisure purposes 66 They are heavily utilized in virtual cinematography In cases where a real world equivalent of a model exists it is much faster to scan the real world object than to manually create a model using 3D modeling software Frequently artists sculpt physical models of what they want and scan them into digital form rather than directly creating digital models on a computer 3D photography edit nbsp 3D selfie in 1 20 scale printed by Shapeways using gypsum based printing created by Madurodam miniature park from 2D pictures taken at its Fantasitron photo booth nbsp Fantasitron 3D photo booth at Madurodam3D scanners are evolving for the use of cameras to represent 3D objects in an accurate manner 67 Companies are emerging since 2010 that create 3D portraits of people 3D figurines or 3D selfie An augmented reality menu for the Madrid restaurant chain 80 Degrees 68 Law enforcement edit 3D laser scanning is used by the law enforcement agencies around the world 3D models are used for on site documentation of 69 Crime scenes Bullet trajectories Bloodstain pattern analysis Accident reconstruction Bombings Plane crashes and moreReverse engineering edit Reverse engineering of a mechanical component requires a precise digital model of the objects to be reproduced Rather than a set of points a precise digital model can be represented by a polygon mesh a set of flat or curved NURBS surfaces or ideally for mechanical components a CAD solid model A 3D scanner can be used to digitise free form or gradually changing shaped components as well as prismatic geometries whereas a coordinate measuring machine is usually used only to determine simple dimensions of a highly prismatic model These data points are then processed to create a usable digital model usually using specialized reverse engineering software Real estate edit Land or buildings can be scanned into a 3D model which allows buyers to tour and inspect the property remotely anywhere without having to be present at the property 70 There is already at least one company providing 3D scanned virtual real estate tours 71 A typical virtual tour Archived 2017 04 27 at the Wayback Machine would consist of dollhouse view 72 inside view as well as a floor plan Virtual remote tourism edit The environment at a place of interest can be captured and converted into a 3D model This model can then be explored by the public either through a VR interface or a traditional 2D interface This allows the user to explore locations which are inconvenient for travel 73 A group of history students at Vancouver iTech Preparatory Middle School created a Virtual Museum by 3D Scanning more than 100 artifacts 74 Cultural heritage edit There have been many research projects undertaken via the scanning of historical sites and artifacts both for documentation and analysis purposes 75 The resulting models can be used for a variety of different analytical approaches 76 77 The combined use of 3D scanning and 3D printing technologies allows the replication of real objects without the use of traditional plaster casting techniques that in many cases can be too invasive for being performed on precious or delicate cultural heritage artifacts 78 In an example of a typical application scenario a gargoyle model was digitally acquired using a 3D scanner and the produced 3D data was processed using MeshLab The resulting digital 3D model was fed to a rapid prototyping machine to create a real resin replica of the original object Creation of 3D models for Museums and Archaeological artifacts 79 80 81 Michelangelo edit In 1999 two different research groups started scanning Michelangelo s statues Stanford University with a group led by Marc Levoy 82 used a custom laser triangulation scanner built by Cyberware to scan Michelangelo s statues in Florence notably the David the Prigioni and the four statues in The Medici Chapel The scans produced a data point density of one sample per 0 25 mm detailed enough to see Michelangelo s chisel marks These detailed scans produced a large amount of data up to 32 gigabytes and processing the data from his scans took 5 months Approximately in the same period a research group from IBM led by H Rushmeier and F Bernardini scanned the Pieta of Florence acquiring both geometric and colour details The digital model result of the Stanford scanning campaign was thoroughly used in the 2004 subsequent restoration of the statue 83 Monticello edit In 2002 David Luebke et al scanned Thomas Jefferson s Monticello 84 A commercial time of flight laser scanner the DeltaSphere 3000 was used The scanner data was later combined with colour data from digital photographs to create the Virtual Monticello and the Jefferson s Cabinet exhibits in the New Orleans Museum of Art in 2003 The Virtual Monticello exhibit simulated a window looking into Jefferson s Library The exhibit consisted of a rear projection display on a wall and a pair of stereo glasses for the viewer The glasses combined with polarised projectors provided a 3D effect Position tracking hardware on the glasses allowed the display to adapt as the viewer moves around creating the illusion that the display is actually a hole in the wall looking into Jefferson s Library The Jefferson s Cabinet exhibit was a barrier stereogram essentially a non active hologram that appears different from different angles of Jefferson s Cabinet Cuneiform tablets edit The first 3D models of cuneiform tablets were acquired in Germany in 2000 85 In 2003 the so called Digital Hammurabi project acquired cuneiform tablets with a laser triangulation scanner using a regular grid pattern having a resolution of 0 025 mm 0 00098 in 86 With the use of high resolution 3D scanners by the Heidelberg University for tablet acquisition in 2009 the development of the GigaMesh Software Framework began to visualize and extract cuneiform characters from 3D models 87 It was used to process ca 2 000 3D digitized tablets of the Hilprecht Collection in Jena to create an Open Access benchmark dataset 88 and an annotated collection 89 of 3D models of tablets freely available under CC BY licenses 90 Kasubi Tombs edit A 2009 CyArk 3D scanning project at Uganda s historic Kasubi Tombs a UNESCO World Heritage Site using a Leica HDS 4500 produced detailed architectural models of Muzibu Azaala Mpanga the main building at the complex and tomb of the Kabakas Kings of Uganda A fire on March 16 2010 burned down much of the Muzibu Azaala Mpanga structure and reconstruction work is likely to lean heavily upon the dataset produced by the 3D scan mission 91 Plastico di Roma antica edit In 2005 Gabriele Guidi et al scanned the Plastico di Roma antica 92 a model of Rome created in the last century Neither the triangulation method nor the time of flight method satisfied the requirements of this project because the item to be scanned was both large and contained small details They found though that a modulated light scanner was able to provide both the ability to scan an object the size of the model and the accuracy that was needed The modulated light scanner was supplemented by a triangulation scanner which was used to scan some parts of the model Other projects edit The 3D Encounters Project at the Petrie Museum of Egyptian Archaeology aims to use 3D laser scanning to create a high quality 3D image library of artefacts and enable digital travelling exhibitions of fragile Egyptian artefacts English Heritage has investigated the use of 3D laser scanning for a wide range of applications to gain archaeological and condition data and the National Conservation Centre in Liverpool has also produced 3D laser scans on commission including portable object and in situ scans of archaeological sites 93 The Smithsonian Institution has a project called Smithsonian X 3D notable for the breadth of types of 3D objects they are attempting to scan These include small objects such as insects and flowers to human sized objects such as Amelia Earhart s Flight Suit to room sized objects such as the Gunboat Philadelphia to historic sites such as Liang Bua in Indonesia Also of note the data from these scans is being made available to the public for free and downloadable in several data formats Medical CAD CAM edit 3D scanners are used to capture the 3D shape of a patient in orthotics and dentistry It gradually supplants tedious plaster cast CAD CAM software are then used to design and manufacture the orthosis prosthesis 94 or dental implants Many Chairside dental CAD CAM systems and Dental Laboratory CAD CAM systems use 3D Scanner technologies to capture the 3D surface of a dental preparation either in vivo or in vitro in order to produce a restoration digitally using CAD software and ultimately produce the final restoration using a CAM technology such as a CNC milling machine or 3D printer The chairside systems are designed to facilitate the 3D scanning of a preparation in vivo and produce the restoration such as a Crown Onlay Inlay or Veneer Creation of 3D models for Anatomy and Biology education 95 96 and cadaver models for educational neurosurgical simulations 97 Quality assurance and industrial metrology edit The digitalisation of real world objects is of vital importance in various application domains This method is especially applied in industrial quality assurance to measure the geometric dimension accuracy Industrial processes such as assembly are complex highly automated and typically based on CAD computer aided design data The problem is that the same degree of automation is also required for quality assurance It is for example a very complex task to assemble a modern car since it consists of many parts that must fit together at the very end of the production line The optimal performance of this process is guaranteed by quality assurance systems Especially the geometry of the metal parts must be checked in order to assure that they have the correct dimensions fit together and finally work reliably Within highly automated processes the resulting geometric measures are transferred to machines that manufacture the desired objects Due to mechanical uncertainties and abrasions the result may differ from its digital nominal In order to automatically capture and evaluate these deviations the manufactured part must be digitised as well For this purpose 3D scanners are applied to generate point samples from the object s surface which are finally compared against the nominal data 98 The process of comparing 3D data against a CAD model is referred to as CAD Compare and can be a useful technique for applications such as determining wear patterns on moulds and tooling determining accuracy of final build analysing gap and flush or analysing highly complex sculpted surfaces At present laser triangulation scanners structured light and contact scanning are the predominant technologies employed for industrial purposes with contact scanning remaining the slowest but overall most accurate option Nevertheless 3D scanning technology offers distinct advantages compared to traditional touch probe measurements White light or laser scanners accurately digitize objects all around capturing fine details and freeform surfaces without reference points or spray The entire surface is covered at record speed without the risk of damaging the part Graphic comparison charts illustrate geometric deviations of full object level providing deeper insights into potential causes 99 100 Circumvention of shipping costs and international import export tariffs edit 3D scanning can be used in conjunction with 3D printing technology to virtually teleport certain object across distances without the need of shipping them and in some cases incurring import export tariffs For example a plastic object can be 3D scanned in the United States the files can be sent off to a 3D printing facility over in Germany where the object is replicated effectively teleporting the object across the globe In the future as 3D scanning and 3D printing technologies become more and more prevalent governments around the world will need to reconsider and rewrite trade agreements and international laws Object reconstruction editMain article 3D reconstruction See also 3D reconstruction from multiple images After the data has been collected the acquired and sometimes already processed data from images or sensors needs to be reconstructed This may be done in the same program or in some cases the 3D data needs to be exported and imported into another program for further refining and or to add additional data Such additional data could be GPS location data After the reconstruction the data might be directly implemented into a local GIS map 101 102 or a worldwide map such as Google Earth or Apple Maps Software editSeveral software packages are used in which the acquired and sometimes already processed data from images or sensors is imported Notable software packages include 103 Qlone 3DF Zephyr Canoma Leica Photogrammetry Suite MeshLab MountainsMap SEM microscopy applications only PhotoModeler SketchUp tomvizSee also edit3D computer graphics software 3D printing 3D reconstruction 3D selfie Angle sensitive pixel Depth map Digitization Epipolar geometry Full body scanner Image reconstruction Light field camera Photogrammetry Range imaging Remote sensing Structured light 3D scanner ThingiverseReferences edit Izadi Shahram Davison Andrew Fitzgibbon Andrew Kim David Hilliges Otmar Molyneaux David Newcombe Richard Kohli Pushmeet Shotton Jamie Hodges Steve Freeman Dustin 2011 Kinect Fusion Proceedings of the 24th annual ACM symposium on User interface software and technology UIST 11 p 559 doi 10 1145 2047196 2047270 ISBN 9781450307161 S2CID 3345516 Moeslund Thomas B Granum Erik 1 March 2001 A Survey of Computer Vision Based Human Motion Capture Computer Vision and Image Understanding 81 3 231 268 CiteSeerX 10 1 1 108 203 doi 10 1006 cviu 2000 0897 Wand Michael Adams Bart Ovsjanikov Maksim Berner Alexander Bokeloh Martin Jenke Philipp Guibas Leonidas Seidel Hans Peter Schilling Andreas April 2009 Efficient reconstruction of nonrigid shape and motion from real time 3D scanner data ACM Transactions on Graphics 28 2 1 15 CiteSeerX 10 1 1 230 1675 doi 10 1145 1516522 1516526 S2CID 9881027 Biswas K K Basu Saurav Kumar 2011 Gesture recognition using Microsoft Kinect The 5th International Conference on Automation Robotics and Applications pp 100 103 doi 10 1109 ICARA 2011 6144864 ISBN 978 1 4577 0330 0 S2CID 8464855 Kim Pileun Chen Jingdao Cho Yong K May 2018 SLAM driven robotic mapping and registration of 3D point clouds Automation in Construction 89 38 48 doi 10 1016 j autcon 2018 01 009 Scott Clare 2018 04 19 3D Scanning and 3D Printing Allow for Production of Lifelike Facial Prosthetics 3DPrint com O Neal Bridget 2015 02 19 CyArk 500 Challenge Gains Momentum in Preserving Cultural Heritage with Artec 3D Scanning Technology 3DPrint com Fausto Bernardini Holly E Rushmeier 2002 The 3D Model Acquisition Pipeline Computer Graphics Forum 21 2 149 172 CiteSeerX 10 1 1 94 7486 doi 10 1111 1467 8659 00574 S2CID 15779281 Matter and Form 3D Scanning Hardware amp Software matterandform net Retrieved 2020 04 01 OR3D What is 3D Scanning Scanning Basics and Devices OR3D Retrieved 2020 04 01 a href Template Cite web html title Template Cite web cite web a CS1 maint numeric names authors list link 3D scanning technologies what is 3D scanning and how does it work Aniwaa Retrieved 2020 04 01 what is 3d scanning laserdesign com Hammoudi Karim 2011 Contributions to the 3D city modeling 3D polyhedral building model reconstruction from aerial images and 3D facade modeling from terrestrial 3D point cloud and images Thesis CiteSeerX 10 1 1 472 8586 Pinggera P Breckon T P Bischof H September 2012 On Cross Spectral Stereo Matching using Dense Gradient Features PDF Proc British Machine Vision Conference pp 526 1 526 12 doi 10 5244 C 26 103 ISBN 978 1 901725 46 9 Retrieved 8 April 2013 permanent dead link Seismic 3D data acquisition Archived from the original on 2016 03 03 Retrieved 2021 01 24 Optical and laser remote sensing Archived from the original on 2009 09 03 Retrieved 2009 09 09 Brian Curless November 2000 From range scans to 3D models ACM SIGGRAPH Computer Graphics 33 4 38 41 doi 10 1145 345370 345399 S2CID 442358 Cui Yan Schuon Sebastian Chan Derek Thrun Sebastian Theobalt Christian 2010 3D shape scanning with a time of flight camera 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition pp 1173 1180 doi 10 1109 CVPR 2010 5540082 ISBN 978 1 4244 6984 0 S2CID 2084943 Franca J G D M Gazziro M A Ide A N Saito J H 2005 A 3D scanning system based on laser triangulation and variable field of view IEEE International Conference on Image Processing 2005 pp I 425 doi 10 1109 ICIP 2005 1529778 ISBN 978 0 7803 9134 5 S2CID 17914887 Roy Mayer 1999 Scientific Canadian Invention and Innovation From Canada s National Research Council Vancouver Raincoast Books ISBN 978 1 55192 266 9 OCLC 41347212 Francois Blais Michel Picard Guy Godin 6 9 September 2004 Accurate 3D acquisition of freely moving objects 2nd International Symposium on 3D Data Processing Visualisation and Transmission 3DPVT 2004 Thessaloniki Greece Los Alamitos CA IEEE Computer Society pp 422 9 ISBN 0 7695 2223 8 Goel Salil Lohani Bharat January 2014 A Motion Correction Technique for Laser Scanning of Moving Objects IEEE Geoscience and Remote Sensing Letters 11 1 225 228 Bibcode 2014IGRSL 11 225G doi 10 1109 LGRS 2013 2253444 S2CID 20531808 Understanding Technology How Do 3D Scanners Work Virtual Technology Archived from the original on 8 December 2020 Retrieved 8 November 2020 Sirat Gabriel Psaltis Demetri 1 January 1985 Conoscopic holography PDF Optics Letters 10 1 4 6 Bibcode 1985OptL 10 4S doi 10 1364 OL 10 000004 PMID 19724327 Strobl K H Mair E Bodenmuller T Kielhofer S Sepp W Suppa M Burschka D Hirzinger G 2009 The self referenced DLR 3D modeler 2009 IEEE RSJ International Conference on Intelligent Robots and Systems pp 21 28 doi 10 1109 IROS 2009 5354708 ISBN 978 1 4244 3803 7 S2CID 3576337 Strobl Klaus H Mair Elmar Hirzinger Gerd 2011 Image based pose estimation for 3 D modeling in rapid hand held motion PDF 2011 IEEE International Conference on Robotics and Automation pp 2593 2600 doi 10 1109 ICRA 2011 5979944 ISBN 978 1 61284 386 5 S2CID 2921156 Trost D 1999 U S Patent No 5 957 915 Washington DC U S Patent and Trademark Office Morano R A Ozturk C Conn R Dubin S Zietz S Nissano J March 1998 Structured light using pseudorandom codes IEEE Transactions on Pattern Analysis and Machine Intelligence 20 3 322 327 doi 10 1109 34 667888 Huang Peisen S 1 December 2006 High resolution real time three dimensional shape measurement Optical Engineering 45 12 123601 Bibcode 2006OptEn 45l3601Z doi 10 1117 1 2402128 Liu Kai Wang Yongchang Lau Daniel L Hao Qi Hassebrook Laurence G 1 March 2010 Dual frequency pattern scheme for high speed 3 D shape measurement Optics Express 18 5 5229 5244 Bibcode 2010OExpr 18 5229L doi 10 1364 OE 18 005229 PMID 20389536 Zhang Song Van Der Weide Daniel Oliver James 26 April 2010 Superfast phase shifting method for 3 D shape measurement Optics Express 18 9 9684 9689 Bibcode 2010OExpr 18 9684Z doi 10 1364 OE 18 009684 PMID 20588818 Wang Yajun Zhang Song 14 March 2011 Superfast multifrequency phase shifting technique with optimal pulse width modulation Optics Express 19 6 5149 5155 Bibcode 2011OExpr 19 5149W doi 10 1364 OE 19 005149 PMID 21445150 Sussex Computer Vision TEACH VISION5 Archived from the original on 2008 09 20 Geodetic Systems Inc www geodetic com Retrieved 2020 03 22 What Camera Should You Use for Photogrammetry 80 lv 2019 07 15 Retrieved 2020 03 22 3D Scanning and Design Gentle Giant Studios Archived from the original on 2020 03 22 Retrieved 2020 03 22 Semi Automatic building extraction from LIDAR Data and High Resolution Image 1Automated Building Extraction and Reconstruction from LIDAR Data PDF Report p 11 Archived from the original PDF on 14 September 2020 Retrieved 9 September 2019 Terrestrial laser scanning Archived from the original on 2009 05 11 Retrieved 2009 09 09 Haala Norbert Brenner Claus Anders Karl Heinrich 1998 3D Urban GIS from Laser Altimeter and 2D Map Data PDF Institute for Photogrammetry IFP Ghent University Department of Geography Glossary of 3d technology terms 23 April 2018 W J Walecki F Szondy M M Hilali 2008 Fast in line surface topography metrology enabling stress calculation for solar cell manufacturing allowing throughput in excess of 2000 wafers per hour Meas Sci Technol 19 2 025302 doi 10 1088 0957 0233 19 2 025302 S2CID 121768537 Vexcel FotoG 3D data acquisition Archived from the original on 2006 10 18 Retrieved 2009 09 09 Vexcel GeoSynth Archived from the original on 2009 10 04 Retrieved 2009 10 31 Photosynth Archived from the original on 2017 02 05 Retrieved 2021 01 24 3D data acquisition and object reconstruction using photos 3D Object Reconstruction From Aerial Stereo Images PDF Thesis Archived from the original PDF on 2011 07 24 Retrieved 2009 09 09 Agisoft Metashape www agisoft com Retrieved 2017 03 13 RealityCapture www capturingreality com Retrieved 2017 03 13 3D data acquisition and modeling in a Topographic Information System PDF Archived from the original PDF on 2011 07 19 Retrieved 2009 09 09 Performance evaluation of a system for semi automatic building extraction using adaptable primitives PDF Archived from the original PDF on 2007 12 20 Retrieved 2009 09 09 Rottensteiner Franz 2001 Semi automatic extraction of buildings based on hybrid adjustment using 3D surface models and management of building data in a TIS Inst fur Photogrammetrie u Fernerkundung d Techn Univ Wien hdl 20 500 12708 373 ISBN 978 3 9500791 3 5 Zhang Zhengyou September 1999 Flexible camera calibration by viewing a plane from unknown orientations Proceedings of the Seventh IEEE International Conference on Computer Vision Vol 1 pp 666 673 doi 10 1109 ICCV 1999 791289 ISBN 0 7695 0164 8 S2CID 206769306 Multi spectral images for 3D building detection PDF Archived from the original PDF on 2011 07 06 Retrieved 2009 09 09 Science of tele robotic rock collection European Space Agency Retrieved 2020 01 03 Scanning rocks retrieved 2021 12 08 Larsson Soren Kjellander J A P 2006 Motion control and data capturing for laser scanning with an industrial robot Robotics and Autonomous Systems 54 6 453 460 doi 10 1016 j robot 2006 02 002 Landmark detection by a rotary laser scanner for autonomous robot navigation in sewer pipes Archived 2011 07 17 at the Wayback Machine Matthias Dorn et al Proceedings of the ICMIT 2003 the second International Conference on Mechatronics and Information Technology pp 600 604 Jecheon Korea Dec 2003 Remondino Fabio June 2011 Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning Remote Sensing 3 6 1104 1138 Bibcode 2011RemS 3 1104R doi 10 3390 rs3061104 Bewley Alex Shekhar Rajiv Leonard Sam Upcroft Ben Lever Paul 2011 Real time volume estimation of a dragline payload PDF 2011 IEEE International Conference on Robotics and Automation pp 1571 1576 doi 10 1109 ICRA 2011 5979898 ISBN 978 1 61284 386 5 S2CID 8147627 Men Hao Pochiraju Kishore 2012 Algorithms for 3D Map Segment Registration In Khosrow Pour Mehdi ed Geographic Information Systems Concepts Methodologies Tools and Applications Concepts Methodologies Tools and Applications Vol I IGI Global p 502 ISBN 978 1 4666 2039 1 Murphy Liam Case Study Old Mine Workings Subsurface Laser Scanning Case Studies Liam Murphy Archived from the original on 2012 04 18 Retrieved 11 January 2012 Forensics amp Public Safety Archived from the original on 2013 05 22 Retrieved 2012 01 11 The Future of 3D Modeling GarageFarm 2017 05 28 Retrieved 2017 05 28 Curless B amp Seitz S 2000 3D Photography Course Notes for SIGGRAPH 2000 Codigos QR y realidad aumentada la evolucion de las cartas en los restaurantes La Vanguardia in Spanish 2021 02 07 Retrieved 2021 11 23 Crime Scene Documentation Lamine Mahdjoubi Cletus Moobela Richard Laing December 2013 Providing real estate services through the integration of 3D laser scanning and building information modelling Computers in Industry 64 9 1272 doi 10 1016 j compind 2013 09 003 Matterport Surpasses 70 Million Global Visits and Celebrates Explosive Growth of 3D and Virtual Reality Spaces Market Watch Retrieved 19 December 2016 The VR Glossary 29 August 2016 Retrieved 26 April 2017 Daniel A Guttentag October 2010 Virtual reality Applications and implications for tourism Tourism Management 31 5 637 651 doi 10 1016 j tourman 2009 07 003 Gillespie Katie May 11 2018 Virtual reality translates into real history for iTech Prep students The Columbian Retrieved 2021 12 09 Cignoni Paolo Scopigno Roberto 18 June 2008 Sampled 3D models for CH applications A viable and enabling new medium or just a technological exercise Journal on Computing and Cultural Heritage 1 1 2 1 2 23 doi 10 1145 1367080 1367082 S2CID 16510261 Wyatt Spratt Simon 2022 11 04 After the Revolution A Review of 3D Modelling as a Tool for Stone Artefact Analysis Journal of Computer Applications in Archaeology 5 1 215 237 doi 10 5334 jcaa 103 hdl 2123 30230 S2CID 253353315 Magnani Matthew Douglass Matthew Schroder Whittaker Reeves Jonathan Braun David R October 2020 The Digital Revolution to Come Photogrammetry in Archaeological Practice American Antiquity 85 4 737 760 doi 10 1017 aaq 2020 59 S2CID 225390638 Scopigno R Cignoni P Pietroni N Callieri M Dellepiane M January 2017 Digital Fabrication Techniques for Cultural Heritage A Survey Fabrication Techniques for Cultural Heritage Computer Graphics Forum 36 1 6 21 doi 10 1111 cgf 12781 S2CID 26690232 Lewis M Oswald C 2019 Can an Inexpensive Phone App Compare to Other Methods when It Comes to 3D Digitization of Ship Models The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences 4210 107 111 Bibcode 2019ISPAr4210 107L doi 10 5194 isprs archives XLII 2 W10 107 2019 S2CID 146021711 ProQuest 2585423206 Submit your artefact www imaginedmuseum uk Retrieved 2021 11 23 permanent dead link Scholarship in 3D 3D scanning and printing at ASOR 2018 The Digital Orientalist 2018 12 03 Retrieved 2021 11 23 Marc Levoy Kari Pulli Brian Curless Szymon Rusinkiewicz David Koller Lucas Pereira Matt Ginzton Sean Anderson James Davis Jeremy Ginsberg Jonathan Shade Duane Fulk 2000 The Digital Michelangelo Project 3D Scanning of Large Statues PDF Proceedings of the 27th annual conference on Computer graphics and interactive techniques pp 131 144 Roberto Scopigno Susanna Bracci Falletti Franca Mauro Matteini 2004 Exploring David Diagnostic Tests and State of Conservation Gruppo Editoriale Giunti ISBN 978 88 09 03325 2 David Luebke Christopher Lutz Rui Wang Cliff Woolley 2002 Scanning Monticello Tontafeln 3D Hetitologie Portal Mainz Germany in German Retrieved 2019 06 23 Kumar S Snyder D Duncan D Cohen J Cooper J 2003 Digital preservation of ancient Cuneiform tablets using 3D scanning Fourth International Conference on 3 D Digital Imaging and Modeling 2003 3DIM 2003 Proceedings pp 326 333 doi 10 1109 IM 2003 1240266 ISBN 978 0 7695 1991 3 S2CID 676588 Mara Hubert Kromker Susanne Jakob Stefan Breuckmann Bernd 2010 GigaMesh and Gilgamesh 3D Multiscale Integral Invariant Cuneiform Character Extraction Vast International Symposium on Virtual Reality doi 10 2312 VAST VAST10 131 138 ISBN 978 3 905674 29 3 Mara Hubert 2019 06 07 HeiCuBeDa Hilprecht Heidelberg Cuneiform Benchmark Dataset for the Hilprecht Collection heiDATA institutional repository for research data of Heidelberg University doi 10 11588 data IE8CCN Mara Hubert 2019 06 07 HeiCu3Da Hilprecht Heidelberg Cuneiform 3D Database Hilprecht Collection heidICON Die Heidelberger Objekt und Multimediadatenbank doi 10 11588 heidicon hilprecht Mara Hubert Bogacz Bartosz 2019 Breaking the Code on Broken Tablets The Learning Challenge for Annotated Cuneiform Script in Normalized 2D and 3D Datasets 2019 International Conference on Document Analysis and Recognition ICDAR pp 148 153 doi 10 1109 ICDAR 2019 00032 ISBN 978 1 7281 3014 9 S2CID 211026941 Scott Cedarleaf 2010 Royal Kasubi Tombs Destroyed in Fire CyArk Blog Archived from the original on 2010 03 30 Retrieved 2010 04 22 a href Template Cite news html title Template Cite news cite news a CS1 maint numeric names authors list link Gabriele Guidi Laura Micoli Michele Russo Bernard Frischer Monica De Simone Alessandro Spinetti Luca Carosso 13 16 June 2005 3D digitisation of a large model of imperial Rome 5th international conference on 3 D digital imaging and modeling 3DIM 2005 Ottawa Ontario Canada Los Alamitos CA IEEE Computer Society pp 565 572 ISBN 0 7695 2327 7 Payne Emma Marie 2012 Imaging Techniques in Conservation PDF Journal of Conservation and Museum Studies Ubiquity Press 10 2 17 29 doi 10 5334 jcms 1021201 3D Body Scanner for Body Scanning in Medicine Field Scantech 2020 08 27 Retrieved 2023 11 15 Iwanaga Joe Terada Satoshi Kim Hee Jin Tabira Yoko Arakawa Takamitsu Watanabe Koichi Dumont Aaron S Tubbs R Shane September 2021 Easy three dimensional scanning technology for anatomy education using a free cellphone app Clinical Anatomy 34 6 910 918 doi 10 1002 ca 23753 PMID 33984162 S2CID 234497497 竹下 俊治 19 March 2021 生物の形態観察における3Dスキャンアプリの活用 Utilization of 3D scanning application for morphological observation of organisms 学校教育実践学研究 in Japanese 27 doi 10 15027 50609 Gurses Muhammet Enes Gungor Abuzer Hanalioglu Sahin Yaltirik Cumhur Kaan Postuk Hasan Cagri Berker Mustafa Ture Ugur December 2021 Qlone A Simple Method to Create 360 Degree Photogrammetry Based 3 Dimensional Model of Cadaveric Specimens Operative Neurosurgery 21 6 E488 E493 doi 10 1093 ons opab355 PMID 34662905 Christian Teutsch 2007 Model based Analysis and Evaluation of Point Sets from Optical 3D Laser Scanners PhD thesis 3D scanning technologies Retrieved 2016 09 15 Timeline of 3D Laser Scanners Implementing data to GIS map PDF Archived from the original PDF on 2003 05 06 Retrieved 2009 09 09 3D data implementation to GIS maps Zlatanova Sisi 2008 Working Group II Acquisition Position Paper Data collection and 3D reconstruction Advances in 3D Geoinformation Systems Lecture Notes in Geoinformation and Cartography pp 425 428 doi 10 1007 978 3 540 72135 2 24 ISBN 978 3 540 72134 5 Retrieved from https en wikipedia org w index php title 3D scanning amp oldid 1193799641, wikipedia, wiki, book, books, library,

article

, read, download, free, free download, mp3, video, mp4, 3gp, jpg, jpeg, gif, png, picture, music, song, movie, book, game, games.