Prime 4.0 is now available

We are pleased to announce the release of Prime 4.0. Since version 3.0, Prime has undergone a number of significant changes and gained many new features. Here are the most significant of them.

1. The parameters of all modules are now located directly in the job text. Previously, the parameters of most modules were stored in libraries based on the file system, and the job referenced these libraries. Changing a module's parameters in one job simultaneously changed them in every other job that used the same library. Sometimes this was genuinely convenient, but it could also cause confusion. Therefore, the parameters of all modules have been moved into the job text (Internal mode). The Velocity, Muting, Processing, and Adjusting libraries were adapted in the same way. The old mode with external libraries has also been preserved (External mode), and the two modes can now be combined.

2. A cluster node monitor has been added to the JobBatch application. It displays information on CPU load, RAM, read/write speed, and temporary directory usage.

The ability to collect various statistics from job modules has also been added. Each module reports its CPU load, main memory usage, number of calls, running time, number of input/output traces, and so on.

3. The Tam application has been completely redesigned. The user now sees all tasks on the project (including those of other users) that are running or have already completed. The logs and launch parameters of all tasks are saved automatically, so any user can view them at any time.

4. The DataManager application has been greatly redesigned, including a new implementation of the version history viewer.

5. It is now possible to attach additional file systems to a project as external storages (Supplemental storages). This is useful when there is not enough space in the project directory. All reading/writing modules (as well as some plugins) have been adapted to work with external storages.

6. A continuation mode has been implemented for jobs that end with the WriteMetacube module. It is now also possible to use several ReadMetacube modules within one job; previously this was impossible in combination with the WriteMetacube module.

7. Important changes in the Migration 3D Cluster plugin:

a) Execution of the plugin has been significantly accelerated (by up to 50% according to internal testing).

b) A metacube can now be used as input data, so there is no need to create a cube of gathers, which takes extra disk space.

c) 3D directional common image gathers can now be created.

8. The aperture horizon type for input data in all inverse kinematic solution plugins has been changed to the MVA type. The new type makes it possible to use horizons with a residual kinematics description that intersect the main horizons of the migration model.

9. The functionality for solving the inverse kinematic problem by the R method has been moved from the KinematicInversion3D plugin to the Layer Inversion Cluster plugin, which makes it possible to parallelize it on the cluster. The old KinematicInversion3D plugin is now considered obsolete.

10. Simultaneous input of heterogeneous 2D data has been added. Previously, 2D data input was performed manually for each profile separately: it was necessary to create a separate dataset, create a 2D line from it, set up the line geometry, assign regular line parameters, and perform binning, repeating all of this for every line. Now these actions can be performed at once for a large amount of 2D data.

11. The WriteSEGY module has been adapted for cluster writing of the output file.

12. Previously, velocity analysis of 3D data was performed over a subset of profiles in the Vela (before migration) and Modeler (after migration) applications, and a coarse profile increment was usually used to save time. Now, in the Builder3d application, it is possible to analyze every line in reasonable time.

13. Many new modules and several plugins have been added.

The most important ones are:

1) AbsorptionCorrection - correction of amplitude spectrum for absorption.

2) AdvPhaseRotator - estimation of the phase spectrum rotation in a sliding window, based on the 4th statistical moment, to increase resolution.
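
One way to picture the 4th-moment criterion: the constant phase rotation of a trace can be chosen to maximize its kurtosis. The function below is a minimal single-window sketch of that idea in NumPy/SciPy, not Prime's actual implementation.

    import numpy as np
    from scipy.signal import hilbert

    # Illustrative sketch only: pick the constant phase rotation that
    # maximizes the normalized 4th moment (kurtosis) of the trace.
    def best_phase_rotation(trace, n_angles=180):
        analytic = hilbert(trace)  # trace + i * Hilbert transform
        angles = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
        kurt = []
        for phi in angles:
            rotated = np.real(analytic * np.exp(1j * phi))  # constant-phase shift
            kurt.append(np.mean(rotated**4) / np.mean(rotated**2) ** 2)
        return angles[int(np.argmax(kurt))]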

3) ApplyPhaseFilter - application of phase filters calculated by AdvPhaseDecon and AdvPhaseRotator modules.

4) CAG_restore (Common Angle Gather Restoration) - transformation of common image point gathers from the offset domain to the domain of incidence/reflection angles (and back) according to the angular scheme used, as well as calculation of attribute parameters that describe the regular signal component, based on the extended Fatti basis (intercept, gradient, and additional trigonometric functions of the amplitude-versus-angle dependence).
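
The extended Fatti basis itself is not reproduced in these notes. As a minimal illustration of the attribute computation, the classic two-term approximation R(theta) ~ A + B*sin^2(theta) can be fitted per time sample by least squares; the function and its parameter names below are assumptions made for the sketch.

    import numpy as np

    # Illustrative two-term AVA fit (intercept A, gradient B) per time sample;
    # the module's extended basis adds further trigonometric terms.
    def fit_intercept_gradient(amplitudes, angles_deg):
        # amplitudes: 2D array (n_samples, n_angles) from an angle gather
        s2 = np.sin(np.radians(angles_deg)) ** 2
        basis = np.column_stack([np.ones_like(s2), s2])  # [1, sin^2(theta)]
        coeffs, *_ = np.linalg.lstsq(basis, amplitudes.T, rcond=None)
        intercept, gradient = coeffs  # each has shape (n_samples,)
        return intercept, gradient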

5) InterpolationKinFilt - data interpolation by kinematic filtering in different slope ranges. For each range of dips, an optimal kinematic filter is selected to interpolate the data within the given range and spatial basis. The interpolation result is obtained by applying the optimal filter to the raw data in a sliding spatial window. The results are then combined to best fit the input data (see the slope-filtering sketch after the next item).

6) NoiseKinFilter - noise suppression by passing only the part of the data that corresponds to a given slope range. An optimal kinematic filter passing the specified range of slopes is constructed on a sliding spatial basis and applied to the source data in a sliding spatial window.
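
Both InterpolationKinFilt and NoiseKinFilter are built around slope-selective (kinematic) filtering. Prime's optimal sliding-window filters are not documented here; the sketch below only conveys the basic idea of passing a slope range, using a plain f-k mask over a whole 2D panel.

    import numpy as np

    # Illustrative slope-range pass filter in the f-k domain (a simplification:
    # the actual modules design optimal filters in sliding spatial windows).
    def slope_pass(panel, dt, dx, pmin, pmax):
        # panel: 2D array (n_t, n_x); pmin, pmax: slope range in s/m
        F = np.fft.fft2(panel)
        f = np.fft.fftfreq(panel.shape[0], d=dt)[:, None]  # frequency, Hz
        k = np.fft.fftfreq(panel.shape[1], d=dx)[None, :]  # wavenumber, 1/m
        with np.errstate(divide="ignore", invalid="ignore"):
            # with numpy's FFT convention, an event t = p*x maps to k = -f*p
            p = np.where(f != 0, -k / f, np.inf)
        mask = (p >= pmin) & (p <= pmax)
        return np.real(np.fft.ifft2(F * mask))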

7) PhaseAdapting - frequency-dependent RMO correction. It estimates and applies local phase rotations on common image gathers (CIG) in sliding time windows, by frequency range, relative to a reference. The reference for a particular offset can be either the sum over all offsets or a local partial sum over neighboring offsets. Typically, the procedure is performed iteratively.
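
Ignoring the sliding time windows and the iteration loop, the core operation can be sketched on a single trace: rotate its phase toward a reference independently in each frequency band. The band edges and the function below are assumptions for illustration, not the module's interface.

    import numpy as np

    # Illustrative band-wise phase alignment of one trace to a reference.
    def band_phase_align(trace, reference, dt, bands):
        n = len(trace)
        f = np.fft.rfftfreq(n, d=dt)
        T, R = np.fft.rfft(trace), np.fft.rfft(reference)
        out = T.copy()
        for f_lo, f_hi in bands:
            sel = (f >= f_lo) & (f < f_hi)
            # mean phase mismatch in the band, from the cross-spectrum
            phi = np.angle(np.sum(R[sel] * np.conj(T[sel])))
            out[sel] = T[sel] * np.exp(1j * phi)  # rotate band toward reference
        return np.fft.irfft(out, n=n)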

8) ProfilesWaveletAlignment - tying of two profile systems. The profile systems can be heterogeneous, for example, a 3D grid of profiles and 2D profiles. The alignment is performed either by a common operator or by a field of interpolated operators adjusted by mutual correlations at the intersection points of the profiles. The filter operator may have a zero-phase, minimum-phase, or mixed-phase characteristic.
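
The module's operator design is not documented here; a standard least-squares (Wiener) shaping filter, built from auto- and cross-correlations at an intersection point, conveys the idea. This is a generic sketch, not Prime's operator.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    # Illustrative least-squares shaping filter g such that g * a is close to b.
    def shaping_filter(a, b, nf, eps=0.001):
        # a, b: equal-length traces at a profile intersection; nf: filter length
        mid = len(a) - 1
        r_aa = np.correlate(a, a, mode="full")[mid:mid + nf].astype(float)
        r_ba = np.correlate(b, a, mode="full")[mid:mid + nf]
        r_aa[0] *= 1.0 + eps  # prewhitening for numerical stability
        return solve_toeplitz((r_aa, r_aa), r_ba)  # symmetric Toeplitz system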

9) Python - implementation of user data-processing algorithms in the Python language.
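
Prime's Python-module interface is not reproduced in these notes. Purely as an illustration of the kind of per-trace algorithm such a module lets users plug into a job, here is a small standalone NumPy function implementing automatic gain control (AGC):

    import numpy as np

    # Standalone illustration; not Prime's Python-module API.
    def agc(trace, window=101):
        # divide each sample by the local RMS amplitude in a sliding window
        power = np.convolve(trace**2, np.ones(window) / window, mode="same")
        return trace / np.sqrt(power + 1e-12)  # epsilon avoids division by zero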

10) RMOCompute - computation of residual kinematic shifts on gathers after NMO or migration.

11) RMOFitting - approximation of the shifts calculated by the RMOCompute module with a polynomial of a given degree. It is used to filter residual shifts and also to prepare input data for solving the inverse kinematic problem.
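
A minimal sketch of the RMOCompute/RMOFitting pair on one gather, assuming the shifts are picked by cross-correlation with a pilot trace (the actual picking algorithm is not described in these notes):

    import numpy as np

    # Illustrative residual-shift picking: per-trace time shift (in seconds)
    # relative to a pilot trace, via the peak of the cross-correlation.
    def residual_shifts(gather, pilot, dt, max_lag):
        # gather: 2D array (n_t, n_offsets); pilot: 1D array (n_t,)
        mid = len(pilot) - 1
        shifts = []
        for trace in gather.T:
            cc = np.correlate(trace, pilot, mode="full")
            window = cc[mid - max_lag: mid + max_lag + 1]
            shifts.append((int(np.argmax(window)) - max_lag) * dt)
        return np.array(shifts)

    # Illustrative polynomial fit of the shifts versus offset (cf. RMOFitting).
    def fit_shifts(offsets, shifts, degree=2):
        coeffs = np.polyfit(offsets, shifts, degree)
        return np.polyval(coeffs, offsets)  # smoothed residual-moveout curve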

12) WaveletExtraction - estimation of the zero-phase waveform from the ACF around a specified horizon.
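
Conceptually, a zero-phase wavelet follows from the autocorrelation because the ACF's Fourier transform is the power spectrum. A minimal sketch, assuming traces already windowed around the horizon (not the module's exact procedure):

    import numpy as np

    # Illustrative zero-phase wavelet from the average power spectrum
    # (equivalently, from the Fourier transform of the average ACF).
    def zero_phase_wavelet(traces, half_len):
        # traces: 2D array (n_t, n_traces) windowed around the horizon
        n = traces.shape[0]
        power = np.mean(np.abs(np.fft.rfft(traces, axis=0)) ** 2, axis=1)
        amp = np.sqrt(power)            # wavelet amplitude spectrum
        w = np.fft.irfft(amp, n=n)      # real spectrum -> zero-phase signal
        w = np.roll(w, n // 2)          # center the symmetric wavelet
        mid = n // 2
        return w[mid - half_len: mid + half_len + 1]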

13) Compress Normals Cube - compression of cubes of inclination angles of local seismic events (measured from the vertical in the OXZ and OYZ planes) into a single compressed form. The resulting cube is used in the Migration 3D Cluster plugin to generate diffraction images and in the BI-WFT 3D Cluster plugin in seismic field modeling mode or in demigration mode.

14) SAI3D-ERC Cluster (Seismic Amplitudes Inversion 3D) - an implementation of robust deconvolution with the ability to extend the spectrum of the seismic record. The cube deconvolution is performed synchronously with the computation of the kernel of the sparsity functional, which can be output as an additional seismic attribute. The robust deconvolution kernel can be computed on one version of a cube (for example, one prepared with a strong coherent-filtering procedure) and applied to another version of the cube. The calculated kernel of the robust deconvolution functional can also be applied to common image point gathers, which, in addition to the deconvolution effect, additionally aligns events on the CIG.

15) Structural-Conform Smoothing Cube Cluster - coherent filtering based on time/depth models, lateral energy balancing based on time/depth models, and conversion of the geometry of one cube to the geometry of another cube (cluster implementation).

We have also released a new series of tutorial videos on working in Prime for 3D land data processing. The new series will appear on our YouTube channel.

Prime Cloud

The Prime 4.0 Cloud version has been adapted to run on Yandex Cloud. Protected channels and isolation of virtual resources ensure full data privacy.

When running jobs in the cloud, you can choose any cluster configuration: specify the number of nodes, the number of cores, and the amount of RAM and disk space.

Seismic data that is not needed at the moment can be moved to Yandex data storage and back in a short time (1 TB of data is transferred in about 7 minutes, roughly 2.4 GB/s). This reduces the cost of storing large volumes of data on the main file system.

The approximate and actual cost of a job is displayed in the JobBatch and Tam applications.


