As part of the international geological and geophysical conference "GeoEurasia-2018", we are pleased to invite you to the following talk:
Patrikeev Pavel, Senior Software Developer at Yandex Terra,
"Visualization, analysis, indexing and sorting of extremely large volumes of seismic data".
Processing seismic data from large 3D projects involves a sharp increase in the overhead of sorting data and preparing it for different algorithms, along with a rising cost of an error at any stage of processing.
A specially constructed set of metadata lets us efficiently prepare seismic data for processing by any algorithm: filtering out unnecessary data by given conditions, splitting the data into independently processed portions, and determining both the order of the portions and the order of the data within each portion.
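The filter/split/order preparation described above can be sketched as a pure metadata operation. This is a minimal illustration, not the actual Yandex Terra implementation: the `TraceMeta` fields (`cdp`, `offset`) and the `plan_processing` helper are hypothetical, and the plan it returns contains only trace identifiers (links), never copies of the trace data itself.

```python
from dataclasses import dataclass
from itertools import groupby

@dataclass(frozen=True)
class TraceMeta:
    """Hypothetical metadata record for one seismic trace."""
    trace_id: int   # link back to the trace in the original file
    cdp: int        # common depth point number
    offset: float   # source-receiver offset, metres

def plan_processing(meta, keep, group_key, order_key):
    """Filter, partition, and order traces using metadata only.

    Returns lists of trace_ids (links to the data), one list per
    independently processable portion.
    """
    selected = sorted((m for m in meta if keep(m)), key=group_key)
    portions = []
    for _, grp in groupby(selected, key=group_key):
        members = sorted(grp, key=order_key)
        portions.append([m.trace_id for m in members])
    return portions

meta = [
    TraceMeta(0, cdp=101, offset=250.0),
    TraceMeta(1, cdp=100, offset=500.0),
    TraceMeta(2, cdp=100, offset=100.0),
    TraceMeta(3, cdp=101, offset=900.0),
]
# Keep near offsets only, gather by CDP, order by offset within each gather.
plan = plan_processing(
    meta,
    keep=lambda m: m.offset < 600.0,
    group_key=lambda m: m.cdp,
    order_key=lambda m: m.offset,
)
# plan == [[2, 1], [0]]
```

Because the plan consists only of links, the expensive trace data is read once, at the moment an algorithm actually consumes a portion.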
Thus, if an algorithm involves repeated use of the seismic data, the data themselves are never duplicated: the algorithm works with references (links) to the data.
While creating the metadata, we can also identify and compute attributes whose visualization allows the input data to be verified before the expensive processing procedures are started.
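As a hedged illustration of such quality-control attributes, the sketch below computes a few cheap per-trace statistics that are commonly plotted in map view before processing. The function name and the particular attributes (RMS amplitude, dead-trace flag) are assumptions for the example, not the specific attributes computed by the talk's software.

```python
import math

def qc_attributes(trace_samples):
    """Compute cheap per-trace attributes during metadata creation.

    trace_samples: a list of floats (one trace's amplitudes).
    Returns a dict of attributes suitable for map-view QC plots.
    """
    n = len(trace_samples)
    rms = math.sqrt(sum(s * s for s in trace_samples) / n) if n else 0.0
    return {
        "n_samples": n,
        "rms_amplitude": rms,
        # A trace of all zeros is "dead" and should be flagged before
        # it silently degrades an expensive processing run.
        "is_dead": all(s == 0.0 for s in trace_samples),
    }

attrs = qc_attributes([0.0, 3.0, -4.0, 0.0])
# attrs["rms_amplitude"] == 2.5
```

Plotting such attributes per source or receiver position makes acquisition problems visible before any compute-heavy step runs.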
When parallel (cluster) algorithms are used, it is possible to write efficiently in parallel to separate files, which are then joined into a single logical dataset by a set of metadata created for them.
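The parallel-write pattern can be sketched as follows: each worker writes its own file without any locking, and a joined metadata index of (file, position) links presents the separate files as one dataset. The file layout (one little-endian double per record) and the helper names are assumptions made for this sketch.

```python
import os
import struct
import tempfile
from concurrent.futures import ThreadPoolExecutor

def write_partition(path, values):
    """Each worker writes its own file; no coordination between workers."""
    with open(path, "wb") as f:
        for v in values:
            f.write(struct.pack("<d", v))
    # Return metadata links: (file path, record position within the file).
    return [(path, i) for i in range(len(values))]

def read_link(link):
    """Resolve a metadata link to the value it points at."""
    path, idx = link
    with open(path, "rb") as f:
        f.seek(idx * 8)
        return struct.unpack("<d", f.read(8))[0]

tmp = tempfile.mkdtemp()
partitions = [[1.0, 2.0], [3.0], [4.0, 5.0]]
paths = [os.path.join(tmp, f"part{i}.bin") for i in range(len(partitions))]

# Write all partitions in parallel; collect the per-file links into one index.
with ThreadPoolExecutor() as pool:
    link_lists = list(pool.map(write_partition, paths, partitions))
index = [link for links in link_lists for link in links]

# The joined metadata presents the separate files as one logical dataset.
values = [read_link(link) for link in index]
# values == [1.0, 2.0, 3.0, 4.0, 5.0]
```

Because no two workers ever touch the same file, write throughput scales with the number of workers, and only the cheap metadata join is sequential.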