PyGSLIB API autogenerated documentation!
Contents:
PyGSLIB.gslib
PyGSLIB.gslib, Module to interface GSLIB programs. This includes the standard programs available at http://www.statios.com/software/gslib77_ls.tar.gz and non-standard GSLIB programs provided by contributors.
Note
The original Fortran 77 code was modified to convert the existing standalone executables into shared libraries. The code was also modified to introduce extra functionality, for example, a maximum number of samples per drillhole in the program kt3d and a variogram cloud in the program gamv. The main changes consisted of:
- the old Fortran 77 code was converted to Fortran 90.
- data and parameters are directly transferred between Fortran and Python functions and not through data files and parameter files.
Copyright 2016, Adrian Martinez Vargas
This software may be modified and distributed under the terms of the MIT license. See the LICENSE.txt file for details.
-
pygslib.gslib.addcoord(nx, ny, nz, xmn, ymn, zmn, xsiz, ysiz, zsiz, grid)
Inserts X, Y, Z coordinates into a grid file.
Parameters: - nx,ny,nz (int) – number of rows, columns and levels.
- xmn,ymn,zmn (double) – origin of coordinates (block centroid).
- xsiz,ysiz,zsiz (double) – block sizes.
- grid (DataFrame) – Pandas dataframe with the input grid variables.
Returns: data – Pandas dataframe (the grid) with extra columns x, y, z.
Return type: Pandas DataFrame
Note
The input grid is expected to have nx*ny*nz rows, in GEOEAS grid order (x changes fastest, then y, then z)
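As a minimal untested sketch of the call (the grid values below are hypothetical):

import pandas as pd
import pygslib

# a 2 x 2 x 1 grid stored in GEOEAS order (x changes fastest)
grid = pd.DataFrame({'value': [0.1, 0.2, 0.3, 0.4]})
grid_xyz = pygslib.gslib.addcoord(nx=2, ny=2, nz=1,
                                  xmn=0.5, ymn=0.5, zmn=0.5,
                                  xsiz=1., ysiz=1., zsiz=1.,
                                  grid=grid)
print (grid_xyz)  # the same table with extra centroid columns x, y, z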
-
pygslib.gslib.block_covariance(parameters)
Calculates the block covariance.
This function calculates the block covariance given a variogram model. The anisotropy can be different in each nested structure. The block size is defined by input discretization points. Note that discretization points can have any arbitrary locations, for example
- regular discretization
- random discretization
- or discretization in an irregular polygon
Parameters: parameters (dict) – dictionary with calculation parameters. Parameters are passed in a dictionary as follows:
parameters = {
    # coordinates of the discretization points
    'xdb'  : [0, 0, 1, 1],  # array('f')
    'ydb'  : [1, 0, 1, 0],  # array('f')
    'zdb'  : [0, 0, 0, 0],  # array('f')
    # variogram model
    'it'   : [3, 2],        # structure type, array('i') with bounds (nst)
    'c0'   : [0.1],         # nugget, array('f')
    'cc'   : [0.4, 0.5],    # variance, array('f') with bounds (nst)
    'aa'   : [8, 16],       # ranges in main direction, array('f') with bounds (nst)
    'aa1'  : [8, 16],       # ranges in second direction, array('f') with bounds (nst)
    'aa2'  : [8, 16],       # ranges in third direction, array('f') with bounds (nst)
    'ang1' : [0, 0],        # input rank-1 array('d') with bounds (nst)
    'ang2' : [0, 0],        # input rank-1 array('d') with bounds (nst)
    'ang3' : [0, 0]}        # input rank-1 array('d') with bounds (nst)
Returns: cbb – block covariance.
Return type: float
Example
>>> print (pygslib.gslib.block_covariance(parameters_blk)) 0.803182760643
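The discretization points may be generated with numpy. Below is a minimal untested sketch for a regular 2 x 2 x 1 discretization of a hypothetical 10 x 10 x 5 block; the variogram model values are also hypothetical:

import numpy as np
import pygslib

# regular 2 x 2 x 1 discretization of a 10 x 10 x 5 block
xg, yg, zg = np.meshgrid([2.5, 7.5], [2.5, 7.5], [2.5])

parameters_blk = {
    'xdb': xg.ravel(), 'ydb': yg.ravel(), 'zdb': zg.ravel(),
    'it': [3, 2], 'c0': [0.1], 'cc': [0.4, 0.5],
    'aa': [8, 16], 'aa1': [8, 16], 'aa2': [8, 16],
    'ang1': [0, 0], 'ang2': [0, 0], 'ang3': [0, 0]}

cbb = pygslib.gslib.block_covariance(parameters_blk)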
-
pygslib.gslib.cdfplt(parameters, fig=None, ax=None, title='Bin probability', label='Data', grid=True, loc=2, color='b', xlog=True, ylog=True, style='+')
Plots a CDF. It uses declustering weights.
Parameters: parameters (dict) – dictionary with calculation parameters The dictionary with parameters may be as follows:
cdfplt_parameters = {
    # CDF limits and definition
    'iwt' : ,  # input boolean (optional, default True). Use weight variable?
    'va'  : ,  # input rank-1 array('d') with bounds (nd). Variable.
    'wt'  : }  # input rank-1 array('d') with bounds (nd) (optional, default array of ones). Declustering weight.
Returns: - dict (a dictionary with some data parameters):

{'binval' : ,  # rank-1 array('d') with bounds (ncl). Class frequency.
 'cl'     : ,  # rank-1 array('d') with bounds (ncl). Class.
 'xpt025' : ,  # float. Percentile 0.025
 'xlqt'   : ,  # float. Percentile 0.25
 'xmed'   : ,  # float. Median (Percentile 0.50)
 'xuqt'   : ,  # float. Percentile 0.75
 'xpt975' : ,  # float. Percentile 0.975
 'xmin'   : ,  # float. Minimum value.
 'xmax'   : ,  # float. Maximum value.
 'xcvr'   : ,  # float. Coefficient of variation.
 'xmen'   : ,  # float. Mean
 'xvar'   : }  # float. Class width (thmax-thmin)/real(ncl), only useful if using arithmetic scale.
- fig (a matplotlib figure)
Example
>>>
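The example section above is empty in the source docstring. A minimal untested sketch, assuming the (dict, fig) return described above (the data below are simulated):

import numpy as np
import pygslib
import matplotlib.pyplot as plt

va = np.random.lognormal(mean=0., sigma=1., size=1000)  # simulated variable
parameters = {'iwt': False,            # ignore declustering weights
              'va': va,
              'wt': np.ones_like(va)}  # dummy weights

stats, fig = pygslib.gslib.cdfplt(parameters, label='lognormal sample')
print (stats['xmed'])  # the median
plt.show()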
-
pygslib.gslib.check_gamv_par(parameters)
Checks the variogram calculation parameters.
Verifies the parameters of the gamv function and checks for possible errors or inconsistencies.
Parameters: parameters (dict) – dictionary with calculation parameters The dictionary with parameters may be as follows:
parameters = {
    # coordinates of the data points
    'x'    : X,    # array('f') with bounds (nd), nd is number of data points
    'y'    : Y,    # array('f') with bounds (nd)
    'z'    : Z,    # array('f') with bounds (nd)
    'bhid' : bhid, # drillhole id for downhole variogram, array('i') with bounds (nd)
    'vr'   : VR,   # variables, array('f') with bounds (nd,nv), nv is number of variables
    'tmin' : -1.0e21, # trimming limits, float
    'tmax' : 1.0e21,  # trimming limits, float
    'nlag' : 10,      # number of lags, int
    'xlag' : 4,       # lag separation distance, float
    'xltol': 2,       # lag tolerance, float
    'azm'    : [0,0,90],       # azimuth, array('f') with bounds (ndir)
    'atol'   : [90,22.5,22.5], # azimuth tolerance, array('f') with bounds (ndir)
    'bandwh' : [50,10,10],     # bandwidth h, array('f') with bounds (ndir)
    'dip'    : [0,0,0],        # dip, array('f') with bounds (ndir)
    'dtol'   : [10,10,10],     # dip tolerance, array('f') with bounds (ndir)
    'bandwd' : [10,10,10],     # bandwidth d, array('f') with bounds (ndir)
    'isill'  : 0,     # standardize sills? (0=no, 1=yes), int
    'sills'  : [100], # variance used to std the sills, array('f') with bounds (nv)
    'ivtail' : [1,1,1,1,1,1,1], # tail var., array('i') with bounds (nvarg), nvarg is number of variograms
    'ivhead' : [1,1,1,1,1,1,1], # head var., array('i') with bounds (nvarg)
    'ivtype' : [1,3,4,5,6,7,8], # variogram type, array('i') with bounds (nvarg)
    'maxclp' : 50000} # maximum number of variogram point cloud points to use, input int
Returns: out – returns True (or 1) if the parameter dictionary is valid
Return type: bool
Example
>>> assert pygslib.gslib.check_gamv_par(parameters)==1, 'sorry this parameter file is wrong'
-
pygslib.gslib.cova3(parameters)
Calculates variogram values given a variogram function and direction.
Parameters: parameters (dict) – dictionary with calculation parameters The dictionary with parameters may be as follows:
parameters = {
    # coordinates of point 1 and 2
    'x1' : 0,  # X coordinate, point 1, float
    'y1' : 0,  # Y coordinate, point 1, float
    'z1' : 0,  # Z coordinate, point 1, float
    'x2' : 1,  # X coordinate, point 2, float
    'y2' : 0,  # Y coordinate, point 2, float
    'z2' : 0,  # Z coordinate, point 2, float
    # variogram model
    'nst' : 2,         # number of nested structures
    'it'  : [3, 3],    # structure type, array('i') with bounds (ivarg)
    'c0'  : [0.1],     # nugget, array('f') with bounds (ivarg)
    'cc'  : [1, 1.4],  # variance, array('f') with bounds (nvarg*nst[0])
    'aa'  : [8, 22],   # parameter a (or range), array('f') with bounds (nvarg*nst[0])
    'irot' : 1,        # index of the rotation matrix for the first nested structure;
                       # the second nested structure will use irot+1, the third irot+2, and so on
    'rotmat' : rotmat} # rotation matrices (output of the function setrot)
Returns: cmax, cova – cmax is the maximum covariance value (required to invert the variogram into a covariance function); cova is the actual covariance value. Note: the variogram value can be calculated as v = cmax - cova.
Return type: float64, float64
Note
rotmat may be obtained with setrot
Todo
calculate irot and rotmat internally and remove it from parameter dict.
Example
>>> # this is the covariance between the points x1, x2
>>> cmax, cova = pygslib.gslib.cova3(parameters_mod)
>>> print (cmax, cova)
2.40999984741 2.24302244186
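A minimal untested sketch combining setrot (documented later in this module) and cova3, using a single isotropic spherical structure; the model values are hypothetical:

import pygslib

# rotation matrix for one structure, no rotation, isotropic
rotmat = pygslib.gslib.setrot(ang1=0, ang2=0, ang3=0,
                              anis1=1, anis2=1, ind=1, maxrot=1)

parameters_mod = {
    'x1': 0, 'y1': 0, 'z1': 0,  # point 1
    'x2': 1, 'y2': 0, 'z2': 0,  # point 2
    'nst': 1, 'it': [1], 'c0': [0.1], 'cc': [0.9], 'aa': [8.],
    'irot': 1, 'rotmat': rotmat}

cmax, cova = pygslib.gslib.cova3(parameters_mod)
print (cmax - cova)  # the variogram value between the two points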
-
pygslib.gslib.declus(parameters)
Declusters data and runs a declustering test with different declustering cell sizes.
Parameters: parameters (dict) – dictionary with calculation parameters The dictionary with parameters may be as follows:
parameters = {
    # database
    'x'  : mydata.x,  # data x coordinates, array('f') with bounds (na), na is number of data points
    'y'  : mydata.y,  # data y coordinates, array('f') with bounds (na)
    'z'  : mydata.z,  # data z coordinates, array('f') with bounds (na)
    'vr' : mydata.vr, # variable, array('f') with bounds (na)
    # cell size
    'anisy'  : 5.,     # Y cell anisotropy (Ysize=size*Yanis), 'f'
    'anisz'  : 3.,     # Z cell anisotropy (Zsize=size*Zanis), 'f'
    'minmax' : 1,      # 0=look for minimum declustered mean (1=max), 'i'
    'ncell'  : 10,     # number of cell sizes, 'i'
    'cmin'   : 5,      # minimum cell size, 'i'
    'cmax'   : 50,     # maximum cell size, 'i'. Will be updated to cmin if ncell == 1
    'noff'   : 8,      # number of origin offsets, 'i'. This is to avoid local minima/maxima
    'maxcel' : 100000} # maximum number of cells, 'i'. This is to avoid large calculations; if MAXCEL < 1 this check will be ignored
Returns: - wtopt (rank-1 array(‘d’) with bounds (nd), weight value)
- vrop (float, declustered mean)
- wtmin (float, weight minimum)
- wtmax (float, weight maximum)
- error (int, runtime error) –
    error = 10  # ncellt > MAXCEL: check for outliers - increase cmin and/or MAXCEL
    error = 1   # allocation error
    error = 2   # deallocation error
- xinc (float, cell size increment)
- yinc (float, cell size increment)
- zinc (float, cell size increment)
- rxcs (rank-1 array(‘d’) with bounds (ncell + 1), xcell size)
- rycs (rank-1 array(‘d’) with bounds (ncell + 1), ycell size)
- rzcs (rank-1 array(‘d’) with bounds (ncell + 1), zcell size)
- rvrcr (rank-1 array(‘d’) with bounds (ncell + 1), declustering mean)
Note
Minimum and maximum valid data values are not tested. Filter out vr values before using this function.
Todo
Create nice output for errors
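A minimal untested sketch, using the cluster.dat dataset from the read_gslib_file example later in this page; the cell size range below is hypothetical:

import numpy as np
import pygslib

mydata = pygslib.gslib.read_gslib_file('dataset/cluster.dat')

parameters_declus = {
    'x': mydata['Xlocation'], 'y': mydata['Ylocation'],
    'z': np.zeros(len(mydata)),  # 2D data: constant Z
    'vr': mydata['Primary'],
    'anisy': 1., 'anisz': 1.,
    'minmax': 0,                 # look for minimum declustered mean
    'ncell': 100, 'cmin': 1., 'cmax': 50.,
    'noff': 8, 'maxcel': 100000}

wtopt, vrop, wtmin, wtmax, error, xinc, yinc, zinc, \
rxcs, rycs, rzcs, rvrcr = pygslib.gslib.declus(parameters_declus)
print (vrop)  # the declustered mean at the optimal cell size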
-
pygslib.gslib.dm2csv_ep(fname, outfile, format='F15.4')
Converts an extended precision Datamine file to *.CSV.
This is an experimental function and may work only if the datamine file is in extended precision format.
Parameters: - fname (str) – datamine file name/path (absolute or relative)
- outfile (str) – csv file name/path (absolute or relative)
- format (str, default 'F15.4') – string format for numeric values
Notes
All numeric outputs are formatted with the string format. This is a limitation and may create non-readable ascii outputs. Non-readable outputs are automatically replaced with the string ‘****’.
Examples
>>> import pygslib as gslib
>>> gslib.gslib.dm2csv_ep('points.dm', 'points.csv', format='F10.2')
-
pygslib.gslib.dm2csv_sp(fname, outfile, format='F15.4')
Converts a single precision Datamine file to *.CSV.
This is an experimental function and may work only if the datamine file is in single precision format.
Parameters: - fname (str) – datamine file name/path (absolute or relative)
- outfile (str) – csv file name/path (absolute or relative)
- format (str, default 'F15.4') – string format for numeric values
Notes
All numeric outputs are formatted with the string format. This is a limitation and may create non-readable ascii outputs. Non-readable outputs are automatically replaced with the string ‘****’.
Examples
>>> import pygslib as gslib
>>> gslib.gslib.dm2csv_sp('points.dm', 'points.csv', format='F10.2')
-
pygslib.gslib.gam(parameters)
Calculates experimental variograms on gridded data.
Parameters: parameters (dict) – dictionary with calculation parameters The dictionary with parameters may be as follows:
parameters = {
    'nx'   : 50,   # number of rows in the gridded data
    'ny'   : 50,   # number of columns in the gridded data
    'nz'   : 1,    # number of levels in the gridded data
    'xsiz' : 1,    # size of the cell in x direction
    'ysiz' : 1,    # size of the cell in y direction
    'zsiz' : 1,    # size of the cell in z direction
    'bhid' : bhid, # bhid for downhole variogram, array('i') with bounds (nd)
    'vr'   : VR,   # variables, array('f') with bounds (nd*nv), nv is number of variables
    'tmin' : -1.0e21, # trimming limits, float
    'tmax' : 1.0e21,  # trimming limits, float
    'nlag' : 10,      # number of lags, int
    'ixd'  : [1,0],   # direction x
    'iyd'  : [1,0],   # direction y
    'izd'  : [1,0],   # direction z
    'isill' : 0,          # standardize sills? (0=no, 1=yes), int
    'sills' : [100, 200], # variance used to std the sills, array('f') with bounds (nv)
    'ivtail' : [1,1,2,2], # tail var., array('i') with bounds (nvarg), nvarg is number of variograms
    'ivhead' : [1,1,2,2], # head var., array('i') with bounds (nvarg)
    'ivtype' : [1,3,1,3]} # variogram type, array('i') with bounds (nvarg)
Returns: - pdis (Distance of pairs falling into this lag)
- pgam (Semivariogram, covariance, correlogram,… value)
- phm (Mean of the tail data)
- ptm (Mean of the head data)
- phv (Variance of the tail data)
- ptv (Variance of the head data)
- pnump (Number of pairs)
Note
For two variables use flatten array with order F, for example:
vr = mydata[['U', 'V']].values.flatten(order='F')
parameters['vr'] = vr
The output is a tuple of numpy 3D ndarrays (pdis,pgam, phm,ptm,phv, ptv,pnump) with dimensions (nvarg, ndir, nlag+2), representing the experimental variograms output.
The output is similar to the one generated by the GSLIB standalone program gam.
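A minimal untested sketch for a single variable on a 50 x 50 grid, following the parameter listing above (the random data are hypothetical):

import numpy as np
import pygslib

nx, ny, nz = 50, 50, 1
VR = np.random.rand(nx*ny*nz)  # one variable, GEOEAS grid order

parameters = {
    'nx': nx, 'ny': ny, 'nz': nz,
    'xsiz': 1, 'ysiz': 1, 'zsiz': 1,
    'bhid': np.ones(nx*ny*nz, dtype=int),
    'vr': VR,
    'tmin': -1.0e21, 'tmax': 1.0e21,
    'nlag': 10,
    'ixd': [1, 0], 'iyd': [0, 1], 'izd': [0, 0],  # E-W and N-S directions
    'isill': 0, 'sills': [np.var(VR)],
    'ivtail': [1], 'ivhead': [1], 'ivtype': [1]}  # one traditional semivariogram

pdis, pgam, phm, ptm, phv, ptv, pnump = pygslib.gslib.gam(parameters)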
-
pygslib.gslib.gamv(parameters)
Calculates experimental variograms on sparse data.
This function is similar to the GSLIB gamv function but with some minor differences:
- the indicator automatic transformation is not applied; you may define indicator variables externally.
- a bhid variable is always required; to avoid the downhole variogram use bhid=None or bhid=0.
- a Z variable is required; you can use a constant coordinate to reduce dimensions, for example, Z=None in the parameter file or Z=0 in the file.
- the variogram cloud was implemented, but it only works for the first variogram and first direction, and only for some variogram types.
Parameters: parameters (dict) – dictionary with calculation parameters The dictionary with parameters may be as follows:
parameters = {
    'x' : X,  # X coordinates, array('f') with bounds (nd), nd is number of data points
    'y' : Y,  # Y coordinates, array('f') with bounds (nd)
    'z' : Z,  # Z coordinates, array('f') with bounds (nd)
    'bhidint' : bhid, # bhid for downhole variogram, array('i') with bounds (nd)
    'vr' : VR,        # variables, array('f') with bounds (nd,nv), nv is number of variables
    'tmin' : -1.0e21, # trimming limits, float
    'tmax' : 1.0e21,  # trimming limits, float
    'nlag' : 10,      # number of lags, int
    'xlag' : 4,       # lag separation distance, float
    'xltol': 2,       # lag tolerance, float
    'azm'    : [0,0,90],       # azimuth, array('f') with bounds (ndir)
    'atol'   : [90,22.5,22.5], # azimuth tolerance, array('f') with bounds (ndir)
    'bandwh' : [50,10,10],     # bandwidth 'horizontal', array('f') with bounds (ndir)
    'dip'    : [0,0,0],        # dip, array('f') with bounds (ndir)
    'dtol'   : [10,10,10],     # dip tolerance, array('f') with bounds (ndir)
    'bandwd' : [10,10,10],     # bandwidth 'vertical', array('f') with bounds (ndir)
    'isill'  : 0,     # standardize sills? (0=no, 1=yes), int
    'sills'  : [100], # variance used to std the sills, array('f') with bounds (nv)
    'ivtail' : [1,1,1,1,1,1,1], # tail var., array('i') with bounds (nvarg), nvarg is number of variograms
    'ivhead' : [1,1,1,1,1,1,1], # head var., array('i') with bounds (nvarg)
    'ivtype' : [1,3,4,5,6,7,8], # variogram type, array('i') with bounds (nvarg)
    'maxclp' : 50000} # maximum number of variogram point cloud points to use, input int
Warning
bhid must be an array of integers
Returns: - pdis (Distance of pairs falling into this lag)
- pgam (Semivariogram, covariance, correlogram,… value)
- phm (Mean of the tail data)
- ptm (Mean of the head data)
- phv (Variance of the tail data)
- ptv (Variance of the head data)
- pnump (Number of pairs)
- cldi (data index of the head)
- cldj (data index of the tail)
- cldg (Semivariogram, covariance, … value)
- cldh (Distance of each pair)
Note
The output variables with prefix cld are for variogram cloud and with prefix p are for directional variograms
The variables with prefix p are similar to the output generated by the GSLIB standalone program gamv.
The variogram cloud only works for the first variogram/direction and only for variogram types 1, 2, 6, 7 and 8.
The variogram types are:
- traditional semivariogram (1)
- traditional cross semivariogram (2)
- pairwise relative semivariogram (6)
- semivariogram of logarithms (7)
- semimadogram (8)
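A minimal untested sketch with one omnidirectional direction and one traditional semivariogram, using cluster.dat from the read_gslib_file example later in this page:

import numpy as np
import pygslib

mydata = pygslib.gslib.read_gslib_file('dataset/cluster.dat')

parameters = {
    'x': mydata['Xlocation'], 'y': mydata['Ylocation'],
    'z': np.zeros(len(mydata)),                  # 2D data: constant Z
    'bhidint': np.ones(len(mydata), dtype=int),  # no downhole variogram
    'vr': mydata['Primary'],
    'tmin': -1.0e21, 'tmax': 1.0e21,
    'nlag': 10, 'xlag': 4, 'xltol': 2,
    'azm': [0], 'atol': [90], 'bandwh': [50],    # one omnidirectional direction
    'dip': [0], 'dtol': [90], 'bandwd': [50],
    'isill': 1, 'sills': [np.var(mydata['Primary'])],
    'ivtail': [1], 'ivhead': [1], 'ivtype': [1], # one traditional semivariogram
    'maxclp': 50000}

pdis, pgam, phm, ptm, phv, ptv, pnump, \
cldi, cldj, cldg, cldh = pygslib.gslib.gamv(parameters)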
-
pygslib.gslib.gamv3D(parameters)
Calculates experimental variograms in all 3D directions.
Parameters: parameters (dict) – dictionary with calculation parameters The dictionary with parameters may be as follows:

parameters = {
    'x'      : ,  # input rank-1 array('d') with bounds (nd)
    'y'      : ,  # input rank-1 array('d') with bounds (nd)
    'z'      : ,  # input rank-1 array('d') with bounds (nd)
    'bhid'   : ,  # input rank-1 array('i') with bounds (nd)
    'vr'     : ,  # input rank-2 array('d') with bounds (nd,nv)
    'tmin'   : ,  # input float
    'tmax'   : ,  # input float
    'nlag'   : ,  # input int
    'xlag'   : ,  # input float
    'ndir'   : ,  # input int
    'ndip'   : ,  # input int
    'isill'  : ,  # input int
    'sills'  : ,  # input rank-1 array('d') with bounds (nv)
    'ivtail' : ,  # input rank-1 array('i') with bounds (nvarg)
    'ivhead' : ,  # input rank-1 array('i') with bounds (nvarg)
    'ivtype' : }  # input rank-1 array('i') with bounds (nvarg)

Returns:
- np  : rank-4 array('d') with bounds (nlag,ndir,ndip,nvarg)
- dis : rank-4 array('d') with bounds (nlag,ndir,ndip,nvarg)
- gam : rank-4 array('d') with bounds (nlag,ndir,ndip,nvarg)
- hm  : rank-4 array('d') with bounds (nlag,ndir,ndip,nvarg)
- tm  : rank-4 array('d') with bounds (nlag,ndir,ndip,nvarg)
- hv  : rank-4 array('d') with bounds (nlag,ndir,ndip,nvarg)
- tv  : rank-4 array('d') with bounds (nlag,ndir,ndip,nvarg)
-
pygslib.gslib.histgplt(parameters, fig=None, ax=None, title='Bin probability', label='Data', alpha=0.7, grid=True, loc=2, color='b', c=0.001, edgecolor='black')
Plots a histogram. It uses declustering weights.
Parameters: parameters (dict) – dictionary with calculation parameters The dictionary with parameters may be as follows:
histgplt_parameters = {
    # histogram limits and definition
    'hmin' : ,  # input float (optional, default: minimum value in the dataset). Minimum value in the histogram.
    'hmax' : ,  # input float (optional, default: maximum value in the dataset). Maximum value in the histogram.
    'ncl'  : ,  # input int (optional, default 10). Number of bins/classes.
    'iwt'  : ,  # input boolean (optional, default True). Use weight variable?
    'ilog' : ,  # input boolean (optional, default False). If True uses log scale, otherwise arithmetic.
    'icum' : ,  # input boolean (optional, default False). If True plots a cumulative histogram, otherwise a frequency histogram.
    'va'   : ,  # input rank-1 array('d') with bounds (nd). Variable.
    'wt'   : }  # input rank-1 array('d') with bounds (nd) (optional, default array of ones). Declustering weight.
Returns: - dict (a dictionary with some data parameters):

{'binval' : ,  # rank-1 array('d') with bounds (ncl). Class frequency.
 'nincls' : ,  # rank-1 array('d') with bounds (ncl). Count of values in the class (not using declustering weight).
 'cl'     : ,  # rank-1 array('d') with bounds (ncl). Class.
 'clwidth': ,  # rank-1 array('d') with bounds (ncl). Class width.
 'xpt025' : ,  # float. Percentile 0.025
 'xlqt'   : ,  # float. Percentile 0.25
 'xmed'   : ,  # float. Median (Percentile 0.50)
 'xuqt'   : ,  # float. Percentile 0.75
 'xpt975' : ,  # float. Percentile 0.975
 'xmin'   : ,  # float. Minimum value.
 'xmax'   : ,  # float. Maximum value.
 'xcvr'   : ,  # float. Coefficient of variation.
 'xmen'   : ,  # float. Mean
 'xvar'   : ,  # float. Variance
 'xfrmx'  : ,  # float. Maximum class frequency
 'dcl'    : }  # float. Class width (thmax-thmin)/real(ncl), only useful if using arithmetic scale.
- fig (a matplotlib figure)
Example
>>>
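The example section above is empty in the source docstring. A minimal untested sketch, assuming the (dict, fig) return described above (the data below are simulated):

import numpy as np
import pygslib
import matplotlib.pyplot as plt

va = np.random.lognormal(mean=0., sigma=0.5, size=1000)  # simulated variable

parameters = {
    'hmin': va.min(), 'hmax': va.max(),
    'ncl': 30, 'iwt': False, 'ilog': False, 'icum': False,
    'va': va, 'wt': np.ones_like(va)}

stats, fig = pygslib.gslib.histgplt(parameters, label='simulated data')
print (stats['xmen'], stats['xvar'])  # mean and variance
plt.show()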
-
pygslib.gslib.kt3d(parameters)
Estimates with the GSLIB program KT3D.
This is a wrap for the GSLIB Fortran code of the program KT3D Version 2.0, originally in Fortran 77. Only minor changes were included, the most relevant are:
- support for maximum number of samples per drillhole was implemented
- the direct file output was redirected to numpy arrays
- the input (for grid estimate) is now given as numpy arrays. The grid definition was retained because it is an input of the GSLIB search super-block algorithm.
- the trimming limits were removed; you may filter out undesired values before estimating
Parameters: parameters (dict) – dictionary with calculation parameters The dictionary with parameters may be as follows:
kt3d_parameters = {
    # Input Data
    # ----------
    'x'    : ,  # rank-1 array('f') with bounds (nd)
    'y'    : ,  # (optional) rank-1 array('f') with bounds (nd)
    'z'    : ,  # (optional) rank-1 array('f') with bounds (nd)
    'vr'   : ,  # (optional) rank-1 array('f') with bounds (nd)
    've'   : ,  # (optional) rank-1 array('f') with bounds (nd)
    'bhid' : ,  # (optional) rank-1 array('i') with bounds (nd)
    # Output (Target)
    # ----------
    'nx'    : ,  # int
    'ny'    : ,  # int
    'nz'    : ,  # int
    'xmn'   : ,  # float
    'ymn'   : ,  # float
    'zmn'   : ,  # float
    'xsiz'  : ,  # float
    'ysiz'  : ,  # float
    'zsiz'  : ,  # float
    'nxdis' : ,  # int
    'nydis' : ,  # int
    'nzdis' : ,  # int
    'outx'  : ,  # rank-1 array('f') with bounds (nout)
    'outy'  : ,  # (optional) rank-1 array('f') with bounds (nout)
    'outz'  : ,  # (optional) rank-1 array('f') with bounds (nout)
    'outextve' : ,  # (optional) rank-1 array('f') with bounds (nout)
    # Search parameters
    # ----------
    'radius'  : ,  # float
    'radius1' : ,  # float
    'radius2' : ,  # float
    'ndmax'   : ,  # int
    'ndmin'   : ,  # int
    'noct'    : ,  # (optional) int
    'nbhid'   : ,  # (optional) int
    'sang1'   : ,  # (optional) float
    'sang2'   : ,  # (optional) float
    'sang3'   : ,  # (optional) float
    # Kriging parameters and options
    # ----------
    'idrif'   : ,  # (optional) rank-1 array('i') with bounds (9)
    'itrend'  : ,  # (optional) int
    'ktype'   : ,  # (optional) int
    'skmean'  : ,  # (optional) float
    'koption' : ,  # (optional) int
    'iktype'  : ,  # (optional) int
    'cut'     : ,  # (optional) rank-1 array('f') with bounds (ncut)
    'idbg'    : ,  # (optional) int
    # Inverse of the power of the distance parameter
    'id2power' : ,  # (optional, default 2) float
    # Variogram parameters
    # ----------
    'c0'   : ,  # float
    'it'   : ,  # rank-1 array('i') with bounds (nst)
    'cc'   : ,  # rank-1 array('f') with bounds (nst)
    'aa'   : ,  # rank-1 array('f') with bounds (nst)
    'aa1'  : ,  # rank-1 array('f') with bounds (nst)
    'aa2'  : ,  # rank-1 array('f') with bounds (nst)
    'ang1' : ,  # (optional) rank-1 array('f') with bounds (nst)
    'ang2' : ,  # (optional) rank-1 array('f') with bounds (nst)
    'ang3' : }  # (optional) rank-1 array('f') with bounds (nst)
The optional input will be internally initialized to zero or to array of zeros unless specified.
Returns: - output (dict, estimation results at target points/blocks)
- debug (debug output for the last block estimated)
- estimate (estimation summary)
The estimation results output will be as follows:
If iktype=0, returns the variable estimate as:

{'outest'     : ,  # kriging estimate
 'outkvar'    : ,  # kriging variance
 'outidpower' : ,  # inverse of the power of the distance estimate
 'outnn'      : }  # nearest neighbor estimate

All these are rank-1 array('f') with bounds (nout).
If iktype=1, returns the median indicator kriging estimate as:

{'outcdf' : }  # rank-2 array('f') with bounds (nout,ncut)
The debug output debug will be only for the last block estimated, but calculated for all the blocks. Make sure to set idbg to zero for estimation in large models.
If idbg = 0, debug will be an empty dictionary. If idbg > 0, debug will be as follows:

{'cbb'        : ,  # float
 'neq'        : ,  # int
 'na'         : ,  # int
 'dbgxdat'    : ,  # rank-1 array('f') with bounds (ndmax)
 'dbgydat'    : ,  # rank-1 array('f') with bounds (ndmax)
 'dbgzdat'    : ,  # rank-1 array('f') with bounds (ndmax)
 'dbgvrdat'   : ,  # rank-1 array('f') with bounds (ndmax)
 'dbgwt'      : ,  # rank-1 array('f') with bounds (ndmax + 11)
 'dbgxtg'     : ,  # float
 'dbgytg'     : ,  # float
 'dbgztg'     : ,  # float
 'dbgkvector' : ,  # rank-1 array('f') with bounds (ndmax + 11)
 'dbgkmatrix' : ,
 'ellipsoid'  : }  # vtkpolydata object with the search ellipsoid
The output estimate is intended to provide an estimation summary, but this is not implemented yet.
Note
Not all the parameters are used in the calculation; the set used depends on the kriging type and options.
Optional parameters can be removed, knowing that kt3d will create internal arrays/variables initialized to zero.
If using nbhid > 0 the hole id number (bhid) is required. Hole IDs must be integers from one to the total number of drillholes. Use the function pygslib.Drillhole.txt2intID(table_name) to get a correct bhid number.
Some important stats and run progress are only available on stdout (the terminal) and will not be available in IPython Notebooks.
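A minimal untested ordinary kriging sketch estimating a single point, following the parameter listing above; the dataset and column names are taken from the read_gslib_file example below, the variogram model is hypothetical, and all omitted optional keys are left to their internal zero defaults:

import numpy as np
import pygslib

mydata = pygslib.gslib.read_gslib_file('dataset/cluster.dat')

parameters = {
    # input data (2D data: constant Z)
    'x': mydata['Xlocation'].values,
    'y': mydata['Ylocation'].values,
    'z': np.zeros(len(mydata)),
    'vr': mydata['Primary'].values,
    # target: one point at (25, 25, 0)
    'nx': 1, 'ny': 1, 'nz': 1,
    'xmn': 25., 'ymn': 25., 'zmn': 0.,
    'xsiz': 1., 'ysiz': 1., 'zsiz': 1.,
    'nxdis': 1, 'nydis': 1, 'nzdis': 1,
    'outx': np.array([25.]), 'outy': np.array([25.]), 'outz': np.array([0.]),
    # search parameters
    'radius': 20., 'radius1': 20., 'radius2': 20.,
    'ndmax': 12, 'ndmin': 4,
    # kriging options: ordinary kriging, no debug
    'ktype': 1, 'idbg': 0,
    # variogram: one spherical structure
    'c0': 0.1, 'it': [1], 'cc': [0.9],
    'aa': [10.], 'aa1': [10.], 'aa2': [10.]}

output, debug, estimate = pygslib.gslib.kt3d(parameters)
print (output['outest'])  # the kriging estimate at the target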
-
pygslib.gslib.postik(parameters)
Postprocesses indicator kriging output (CDF).
Parameters: parameters (dict) – dictionary with calculation parameters The dictionary with parameters may be as follows:
postik_parameters = {
    # output option and parameter
    'iout'   : ,  # int. 1 E-type, 2 probability and means above/below outpar, 3 p-quantile for outpar=p, 4 conditional variance
    'outpar' : ,  # float. Parameter for iout
    # the thresholds
    'ccut1'  : ,  # 1D array of floats. Cutoffs used in MIK
    # volume support?, type, varred
    'ivol'   : ,  # int. If 1 the support correction is applied
    'ivtyp'  : ,  # int. 1 for affine correction, 2 for indirect lognormal correction
    'varred' : ,  # float. Variance reduction factor, usually r ~ block variance / point variance
    # minimum and maximum Z value
    'zmin'   : ,  # float. Minimum value in local CDF
    'zmax'   : ,  # float. Maximum value in local CDF
    # lower, middle and upper tail: option, parameter
    'ltail'  : ,  # int. Lower tail interpolation function: 1 linear, 2 power model, 3 tabulated quantiles
    'ltpar'  : ,  # float. Lower tail function parameter
    'middle' : ,  # int. Middle CDF interpolation function: 1 linear, 2 power model, 3 tabulated quantiles
    'mpar'   : ,  # float. Middle CDF segment function parameter
    'utail'  : ,  # int. Upper tail interpolation function: 1 linear, 2 power model, 3 tabulated quantiles, 4 hyperbolic
    'utpar'  : ,  # float. Upper tail function parameter
    # maximum discretization
    'maxdis' : ,  # int. Discretization of the local CDF
    # 1D arrays with global distribution
    'vr'     : ,  # 1D array of floats for table look-up if tabulated quantiles are used as interpolation function
    'wt'     : ,  # 1D array of floats with weights on table look-up
    # 2D array with IK3D output (continuous)
    'p'      : }  # 2D array of floats. This is the MIK output or any array of local CDFs
Returns: - out1 (rank-1 array('f') with bounds (na))
- out2 (rank-1 array('f') with bounds (na))
- out3 (rank-1 array('f') with bounds (na))
If iout == 1 (E-type estimate), out1 will be the E-type estimate; out2 and out3 will be NaN.
If iout == 2 (prob > cutoff), out1 will be the probability above the cutoff, out2 the mean above the cutoff and out3 the mean below the cutoff.
If iout == 3 (p-quantile estimate), out1 will be the Z value corresponding to CDF == p; out2 and out3 will be NaN.
If iout == 4 (conditional variance), out1 will be the conditional variance; out2 and out3 will be NaN.
Example
>>> out1,out2,out3,error = postik(parameters)
Todo
Fix bug. The algorithm fails if there are nans in probabilities.
-
pygslib.gslib.read_gslib_file(fname, maxnvar=500)
Reads a GSLIB file.
The native Fortran code is used to read GEOEAS (gslib) files
Parameters: - fname (str) – file name path (absolute or relative)
- maxnvar (Optional[int]) – maximum number of variables in file
Returns: data – The file is returned in memory as a Pandas DataFrame
Return type: Pandas DataFrame
Examples
>>> import pygslib as gslib
>>> mydata = gslib.gslib.read_gslib_file('dataset/cluster.dat')
>>> print (mydata.head(n=5))
   Xlocation  Ylocation  Primary  Secondary  Declustering Weight
0       39.5       18.5     0.06       0.22                1.619
1        5.5        1.5     0.06       0.27                1.619
2       38.5        5.5     0.08       0.40                1.416
3       20.5        1.5     0.09       0.39                1.821
4       27.5       14.5     0.09       0.24                1.349
-
pygslib.gslib.rotcoord(ang1=0, ang2=0, ang3=0, anis1=1, anis2=1, ind=1, invert=0)
Generates a rotation matrix.
This is the rotation matrix used internally in standard GSLIB programs
Parameters: - ang1 (input float) – azimuth angle (rotation along Z)
- ang2 (input float) – dip angle, downdip positive (rotation along X)
- ang3 (input float) – plunge angle (rotation along Z)
- anis1 (input float) – first anisotropy ratio (semi-major direction/major direction)
- anis2 (input float) – second anisotropy ratio (minor direction/major direction)
- ind (input int) – matrix index
- maxrot (input int) – number of matrices (for example, use 3 for a ZXZ rotation)
Returns: rotmat
Return type: rank-3 array(‘d’) with bounds (maxrot,3,3)
-
pygslib.gslib.rotscale(parameters)
Rotates and rescales a set of 3D coordinates.
The new rotated and rescaled system of coordinates will have origin at [x = 0, y = 0, z = 0]. This point corresponds to [x0,y0,z0] (the pivot point) in the original system of coordinates.
Parameters: parameters (dict) – dictionary with calculation parameters. Parameters are passed in a dictionary as follows:
parameters = {
    'x' : mydata.x, # data x coordinates, array('f') with bounds (na), na is number of data points
    'y' : mydata.y, # data y coordinates, array('f') with bounds (na)
    'z' : mydata.z, # data z coordinates, array('f') with bounds (na)
    'x0' : 0,  # pivot point coordinate X, 'f'
    'y0' : 0,  # pivot point coordinate Y, 'f'
    'z0' : 0,  # pivot point coordinate Z, 'f'
    'ang1' : 45., # Z rotation angle, 'f'
    'ang2' : 0.,  # X rotation angle, 'f'
    'ang3' : 0.,  # Y rotation angle, 'f'
    'anis1' : 1., # Y cell anisotropy, 'f'
    'anis2' : 1., # Z cell anisotropy, 'f'
    'invert' : 0} # 0 do rotation, <> 0 invert rotation, 'i'
Returns: - xr (rank-1 array('d') with bounds (nd), new X coordinate)
- yr (rank-1 array('d') with bounds (nd), new Y coordinate)
- zr (rank-1 array('d') with bounds (nd), new Z coordinate)
Note
This is a nonstandard GSLIB function and is based on the paper:
The rotation is {Z counter clockwise ; X clockwise; Y counter clockwise} [-ZX-Y]
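A minimal untested sketch rotating two points 45 degrees around Z about the pivot [10, 10, 0]:

import numpy as np
import pygslib

parameters = {
    'x': np.array([10., 20.]),
    'y': np.array([10., 20.]),
    'z': np.array([0., 0.]),
    'x0': 10., 'y0': 10., 'z0': 0.,      # pivot point
    'ang1': 45., 'ang2': 0., 'ang3': 0.,
    'anis1': 1., 'anis2': 1.,            # no rescaling
    'invert': 0}                         # direct rotation

xr, yr, zr = pygslib.gslib.rotscale(parameters)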
-
pygslib.gslib.set_nan_value(value=nan)
Sets the non-estimated value in the modules KT3D and POSTIK.
This will change the value assigned to non-estimated values.
Parameters: value (numeric, optional, default np.nan) –
Note
This will change the behavior of KT3D for the current Python session. To see the actual non-estimated value you may inspect __gslib__kt3d.UNEST.
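For example:

>>> import pygslib
>>> pygslib.gslib.set_nan_value(value=-999)  # report non-estimated blocks as -999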
-
pygslib.gslib.setrot(ang1=0, ang2=0, ang3=0, anis1=1, anis2=1, ind=1, maxrot=1)
Updates a rotation and scaling matrix given a set of angles.
This is necessary for some internal GSLIB functions (e.g. cova3).
Warning
The rotations are peculiar, do not use this to rotate data.
Parameters: - ang1 (input float) – azimuth angle (rotation along Z)
- ang2 (input float) – dip angle, downdip positive (rotation along X)
- ang3 (input float) – plunge angle (rotation along Z)
- anis1 (input float) – first anisotropy ratio (semi-major direction/major direction)
- anis2 (input float) – second anisotropy ratio (minor direction/major direction)
- ind (input int) – matrix index
- maxrot (input int) – number of matrices (for example, use 3 for a ZXZ rotation)
Returns: rotmat
Return type: rank-3 array(‘d’) with bounds (maxrot,3,3)
PyGSLIB.drillhole
PyGSLIB Drillhole, module to handle drillhole data, desurvey interval tables and other drillhole-related processes.
Copyright (C) 2015 Adrian Martinez Vargas
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>
-
class pygslib.drillhole.Drillhole
Drillhole object with functions to desurvey and validate drillholes.
Parameters: - collar (Pandas DataFrame) –
- survey (Pandas DataFrame) –
Example
>>> mydrillhole = pygslib.drillhole.Drillhole(collar, survey)
>>> mydrillhole.addtable(assay, 'assay', overwrite=False)
The collar table must contain the fields:
- BHID with any dtype
- XCOLLAR, YCOLLAR, ZCOLLAR with dtype float64
- (Optional) LENGTH with dtype float64. This is the length of the drillhole and can be used in some functions, for example, to fill gaps.
The survey table must contain the fields:
- BHID with any dtype
- AT, AZ, DIP with dtype float64
-
collar
Pandas DataFrame – the collar table
-
survey
Pandas DataFrame – the survey table
-
tables
[Pandas DataFrame] – list of Pandas DataFrames with interval tables (e.g. assay) containing compulsory fields: BHID with any dtype and FROM, TO with dtype float64.
-
table_mames
[str] – list of table names
Note
A new copy of the input data will be created in memory. To work with shared memory in an external DataFrame you may copy back the table:
>>> shared_collar = mydrillholeDB.collar
- To add interval tables use addtable
- The existence of compulsory fields is only validated during object initialization and when adding interval tables
- Survey tables must have at least two intervals, and an interval with AT=0 is required for desurveying
Todo
- [] Fix example section.
-
add_gaps(str table_name, str new_table_name, bint overwrite=False, double tol=0.01, bint endhole=False, bint clean=True)
Fills gaps with new FROM-TO intervals.
All the gaps, including the gaps at the collar and at the end of the drillhole, will be filled with NaN intervals.
A code field _id0 will be added to the new and existing tables. _id0 can be used to link input table rows with output table rows. Gaps will have _id0 = -999.
Parameters: - table_name (name of the table to add gaps to) – it must be an existing table at drillhole.table.keys()
- new_table_name (name of the new table with gaps added) – it can be a new name, not in drillhole.table.keys(), or a table name in drillhole.table.keys() if overwrite=True
- overwrite (default False) – if the table exists and overwrite=True the existing table will be overwritten
- tol (default 0.01) – gaps and overlaps within tol will be ignored but adjusted in order to impose FROM[i-1]==TO[i]
- endhole (default False) – if True a gap at the end of the drillhole will be added. The end of the drillhole is calculated from drillhole.collar['LENGTH']; an error will be raised if there is no 'LENGTH' field at collar or if there are undefined values at LENGTH. Warnings will be raised if TO > LENGTH.
- clean (default True) – delete temporary columns created with suffix __tmp__
Returns: gap,overlap
Return type: list
Example
>>> gap, overlap = mydrillhole.add_gaps(table_name='assay', new_table_name='tmp', overwrite=False, tol=0.01, endhole=False, clean=True)
Note
The output gap and overlap contain the _id0 of the rows before the gap or the overlap was detected. Gaps and overlaps at the end of the drillhole (if endhole==True) will have _id0 = -888.
Todo
- [] Fix example section.
-
addtable(object table, str table_name, bint overwrite=False)
Adds a table and assigns a name.
Parameters: - table (Pandas DataFrame) – containing compulsory fields BHID with any dtype and FROM, TO with dtype float64
- table_name (str) – the name of the table
- overwrite (boolean, optional, default False) – if the name of the table exists, overwrite=True will overwrite the existing table with the new input table
Examples
>>> mydrillhole.addtable(assay, 'assay')
Todo
- [] Fix example section.
-
bench_composite(str table_name, double zmax, double bench, double tol=0.01)
This function is not implemented yet.
-
collar2table(str table_name, str new_table_name, list collar_prop, bint overwrite=False)
Adds collar properties to a table.
Parameters: - table_name (name of the table) –
- new_table_name (name of the output table) –
- collar_prop (list with property names in collar) – the property names must exist at drillhole.collar.columns; if collar_prop is None all the collar properties will be copied
- overwrite (default False) – if new_table_name exists and overwrite == True the existing table will be overwritten
Example
>>> mydrillhole.collar2table(table_name='assay', new_table_name='assay', collar_prop=['TYPE', 'COMMENT'], overwrite=True)
Todo
- [] Fix example section.
-
del_table(str table_name)
Deletes a table.
Parameters: table_name (str) – the name of the table
Examples
>>> mydrillhole.del_table('assay')
Todo
- [] Fix example section.
-
desurvey(str table_name, bint endpoints=False, bint warns=True)
Desurveys a drillhole table.
Creates coordinates and direction angles at table midpoint intervals. If endpoints=True it also creates coordinate fields at endpoint intervals. The existing coordinate fields will be overwritten.
Parameters: - table_name (str) – a table name existing in the drillhole object
- endpoints (boolean, optional, default False) – if True coordinates at beginning and end intervals are also created
- warns (boolean, optional, default True) – if True warns will be raised if inconsistencies are detected
- method (int, optional, default 1 (minimum curvature)) – the desurvey method: 1 is minimum curvature and 2 is tangential
Examples
>>> # sort with pandas function DataFrame.sort
>>> mydrillhole.collar.sort(['BHID'], inplace=True)
>>> mydrillhole.survey.sort(['BHID', 'AT'], inplace=True)
>>> mydrillhole.table['assay'].sort(['BHID', 'FROM'], inplace=True)
>>> # desurvey
>>> mydrillhole.desurvey('assay', endpoints=True, method=1)
Note
collar, survey and the input table must be sorted. If you call the function with endpoints=False, endpoints already desurveyed may not be overwritten.
This function calls __dsmincurb() and __desurv1dh() or __dstangential() functions to calculate the desurvey value.
Both desurvey methods (tangential and minimum curvature) use angles interpolated from two survey points in the survey table.
Todo
- [] Fix example section.
-
downh_composite(str table_name, str variable_name, str new_table_name, double cint=1, double minlen=-1, bint overwrite=False)
Downhole composites one variable at a time.
Parameters: - table_name (name of the table with drillhole intervals) – it must be an existing table at drillhole.table.keys()
- variable_name (name of the variable in the "table_name" table) – it must be an existing column at drillhole.table[table_name].columns
- new_table_name (name of the new table with composites) – it can be a new name, not in drillhole.table.keys(), or a table name in drillhole.table.keys() if overwrite=True
- cint (default 1) – compositing interval length; it must be greater than zero
- minlen (default -1) – minimum composite length; if < 0 then minlen will be cint/2
- overwrite (default False) – if the table exists and overwrite=True the existing table will be overwritten
Example
>>> mydrillhole.downh_composite(table_name='assay', variable_name='Au', new_table_name='cmp', cint=1, minlen=-1, overwrite=False)
Note
Undefined intervals in ‘variable_name’ will be excluded.
To composite per lithology, you can filter out each lithology composite and then combine the composite tables.
This algorithm starts from distance zero and no residual will be created at the top; for example, if the first interval is [FROM=1.5, TO=2] it will be composited as [FROM=0, TO=2, _len=0.5] for 2 m composites. This produces consistent results if different tables are composited, making later table combination easier.
The accumulation output '_acum' can be used to produce accumulated variables, to identify the number of intervals in each composite, or to track the compositing of categorical variables (coded as numbers).
Todo
- [] Fix example section.
-
export_core_vtk_line(str table_name, str filename, str title='')
Exports a desurveyed drillhole table to VTK lines. Endpoints are required.
Parameters: - table_name (str) –
- filename (str) – This is the absolute or relative file path and name.
- title (str) – The file header (or title, or comment)
See also
pygslib.vtktool()
Note
To export to VTK points see pygslib.vtktool
Examples
>>> mydrillhole.export_core_vtk_line(table_name='assay', filename='assay_line.vtk')
Todo
- [] Fix example section
-
fix_survey_one_interval_err(self, double dummy_at)
Fixes drillholes with a single survey record.
Drillholes must have at least two survey records. This function adds a dummy survey record to drillholes with a single survey interval, at a position dummy_at.
Parameters: dummy_at (float) – this value will be used as a dummy AT value at survey
Example
>>> mydrillhole.fix_survey_one_interval_err(dummy_at=900.0)
Note
The dummy record added is a duplicate of the only record available, but with AT = dummy_at. The result will be a straight drillhole.
You will not be able to desurvey if there are drillholes with only one survey record.
Todo
- [] Fix example section.
-
fix_zero_interval(str table_name, str new_table_name, bint overwrite=False, double tol=0.01, bint clean=True, bint endhole=False, bint addgaps=True)
Removes zero-length intervals.
The function removes zero-length intervals and adds gaps if the gap length is longer than the tolerance.
Parameters: - table_name (name of the table) –
- new_table_name (name of the new table) – it must not exist at drillhole.table.keys() unless overwrite=True
- overwrite (default False) – if new_table_name exists and overwrite == True the existing table will be overwritten
- tol (default 0.01) – segments with length <= tol will be considered as zero length and removed
- endhole (default False) – if True a gap at the end of the drillhole will be added. The end of the drillhole is calculated from drillhole.collar['LENGTH']; an error will be raised if there is no 'LENGTH' field at collar or if there are undefined values at LENGTH. Warnings will be raised if TO > LENGTH.
- clean (default True) – delete temporary columns created with suffix __tmp__
Example
>>> gap, overlap = mydrillhole.fix_zero_interval(table_name='assay', new_table_name='tmp', overwrite=False, tol=0.01, clean=True, endhole=False, addgaps=True)
Note
This function removes zero-length intervals and then calls the add_gaps function.
Todo
- [] Fix example section.
-
key_composite(str table_name, str key_name, str variable_name, str new_table_name, double tol=0.001, bint overwrite=False)
Downhole composites one variable at a time.
Parameters: - table_name (name of the table with drillhole intervals) – it must be an existing table at drillhole.table.keys()
- key_name (name of the key field in the "table_name" table) – it must be an existing field at drillhole.table[table_name].columns
- variable_name (name of the variable in the "table_name" table) – it must be an existing field at drillhole.table[table_name].columns
- new_table_name (name of the new table with composites) – it can be a new name, not in drillhole.table.keys(), or a table name in drillhole.table.keys() if overwrite=True
- tol (default 0.001) – tolerance to ignore gaps
- overwrite (default False) – if the table exists and overwrite=True the existing table will be overwritten
Example
>>>
Todo
- [] Fix example section.
- [] Add accumulator and count variable
-
merge(str table_A, str table_B, str new_table_name, bint overwrite=False, double tol=0.01, bint clean=True)
Combines two tables by intersecting intervals.
This function requires drillholes without gaps and overlaps. You may run add_gaps on table_A and table_B before using this function.
Parameters: - table_A (name of the first table) – it must be an existing table at drillhole.table.keys()
- table_B (name of the second table) – it must be an existing table at drillhole.table.keys()
- new_table_name (name of the new table) – it must not exist at drillhole.table.keys() unless overwrite=True
- overwrite (default False) – if new_table_name exists and overwrite == True the existing table will be overwritten
- tol (default 0.01) – segments, gaps and overlaps within tol will be ignored but adjusted
- clean (default True) – delete temporary columns created with suffix __tmp__
Example
>>> mydrillhole.merge(table_A='assay', table_B='litho', new_table_name='tmp', overwrite=False, tol=0.01, clean=True)
Todo
- [] Fix example section.
-
split_long_intervals(str table, str new_table, double maxlength, double splitlength, double minlength=0, bint overwrite=False, bint clean=True, long buff=10)
Splits long sample intervals in a table.
This function splits sample intervals longer than maxlength into intervals of approximately splitlength, to be precise, equal to newlength = interval_length / splitlength.
The interval will not be split if newlength < minlength.
Parameters: - table (name of the first table) – it must be an existing table at drillhole.table.keys()
- new_table (name of the new table) – it must not exist at drillhole.table.keys() if overwrite == False
- maxlength – all intervals longer than this length will be split if the resulting splits are longer than minlength
- splitlength (default 0) – reference split length. If <= 0 or not specified it will be set equal to maxlength
- minlength (default 0) – intervals will not be split if the resulting split is less than this value. If <= 0 or not specified it will be set equal to splitlength/2
- overwrite (default False) – if new_table exists and overwrite == True the existing table will be overwritten
- clean (default True) – delete temporary columns created with suffix __tmp__
- buff (default 10) – an internal and temporary table with 10*total_length/splitlength intervals will be created in memory. If you run into a memory error, use a larger number.
Example
>>> mydrillhole.split_long_intervals(table='litho', new_table='litho_split', maxlength=5, splitlength=4.5, minlength=2., overwrite=False, clean=True)
Todo
- [] Fix example section.
-
txt2intID(str table_name)
Creates an alternative BHID of type integer.
A new field BHIDint will be created in table[table_name] and in collar. BHIDint is just an ordered list of integers.
BHIDint may be required by some functions compiled in Fortran, for example pygslib.gslib.gamv and pygslib.gslib.kt3d.
Parameters: table_name (str) –
Examples
>>> mydrillhole.collar.sort(['BHID'], inplace=True)
>>> mydrillhole.table['assay'].sort(['BHID', 'FROM'], inplace=True)
>>> mydrillhole.txt2intID('assay')
Note
The collar and the table must be sorted.
Todo
- [] Fix example section.
-
validate()
Runs a set of basic validations on survey and collar, consisting of:
- check existence of Null values
- check for surveys without a record at AT=0
- check that coordinates and direction angles dtypes are float64
- check survey without collar and collar without survey
Examples
>>> mydrillhole.validate()
Note
- You may run this validation before desurveying
- Only a few validation tests are implemented
- Validation errors will raise a Python error
Todo
- [] Implement checks on the relation between tables, large variations in survey angles, and missing BHID.
- [] Collect all errors and return a list of errors.
- [] Collect all warnings and return a list of warnings.
- [] Fix example section.
See also
-
validate_table(str table_name)
Runs a set of basic validations on interval tables, consisting of:
- checks null values in table BHID
- checks null values in From/To
- checks that FROM and TO dtypes are float64
Note
- You may run this validation before desurveying
- Only a few minimal validations are implemented for now
- No value is returned; errors are raised
Todo
- [] Implement checks on the relation between tables, gaps, overlaps, missing BHID, missing survey, and intervals longer than max_depth.
- [] Collect all errors and return a list of errors.
- [] Collect all warnings and return a list of warnings.
- [] Fix example section.
See also
Examples
>>> mydrillhole.validate_table('assay')
-
pygslib.drillhole.ang2cart(float azm, float dip)
Converts azimuth and dip to x, y, z.
Returns the x,y,z coordinates of a 1 unit vector with origin of coordinates at 0,0,0 and direction defined by azm (azimuth) and dip (downward positive) angles.
Parameters: azm,dip (float, in degrees) – the direction angles: azimuth, with 0 or 360 pointing north, and the dip angle measured from the horizontal surface, positive downward
Returns: out – Cartesian coordinates
Return type: tuple of floats, (x, y, z)
Examples
>>> from pygslib.drillhole import ang2cart
>>> ang2cart(azm=45, dip=75)
(0.1830127239227295, 0.1830127090215683, -0.9659258127212524)
See also
Note
This function converts direction angles into a Cartesian representation, which is easy to interpolate. To convert x,y,z values back to direction angles use the function cart2ang.
-
pygslib.drillhole.cart2ang(float x, float y, float z)
Converts x, y, z to azimuth and dip.
Returns the azimuth and dip of a 1 unit vector with origin of coordinates at p1 [0,0,0] and p2 [x, y, z].
Parameters: x,y,z (float) – coordinates x, y, z of a one-unit-length vector with origin of coordinates at p1 [0,0,0]
Warning
If x, y or z are outside the interval [-1,1] the values will be truncated to 1. If the coordinates are outside this interval due to rounding error the result will be OK; otherwise the result will be wrong.
Returns: out – the direction angles: azimuth, with 0 or 360 pointing north, and the dip angle measured from the horizontal surface, positive downward
Return type: tuple of floats, (azm, dip)
Examples
>>> from pygslib.drillhole import cart2ang
>>> cart2ang(x=0.18301, y=0.18301, z=-0.96593)
(45.0, 75.00092315673828)
See also
-
pygslib.drillhole.groupcat()
Regroups categories using grouping codes defined in a table.
Parameters: - codes (Pandas DataFrame) – DataFrame with categories and corresponding grouped categories
- table (Pandas DataFrame) – DataFrame with categories to be grouped
- ccol (str) – column with categories in DataFrame codes
- cgroup (str) – column with grouped categories in DataFrame codes
- tablegrup (str, default 'GROUP') – column with categories in DataFrame table
Returns: - Dataframe with grouped categories
- Dict with grouping table
Todo
- [] Add example section
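A hypothetical untested sketch; the argument names follow the parameter list above, and the column names are invented for illustration:

import pandas as pd
import pygslib

# map detailed lithologies to broader groups
codes = pd.DataFrame({'LITO' : ['BAS', 'AND', 'GRA'],
                      'GROUP': ['VOLC', 'VOLC', 'INTR']})
table = pd.DataFrame({'LITO' : ['BAS', 'GRA', 'AND']})

newtable, groups = pygslib.drillhole.groupcat(codes=codes, table=table,
                                              ccol='LITO', cgroup='GROUP',
                                              tablegrup='LITO')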
-
pygslib.drillhole.interp_ang1D(float azm1, float dip1, float azm2, float dip2, float len12, float d1)
Interpolates the azimuth and dip angle over a line.
Given a line with length len12 and endpoints with direction angles azm1, dip1, azm2, dip2, this function returns the direction angles of a point on this line located at a distance d1 from the point with direction angles azm1, dip1.
Parameters: azm1,dip1,azm2,dip2,len12,d1 (float) – azm1, dip1, azm2, dip2 are direction angles: azimuth, with 0 or 360 pointing north, and dip angles measured from the horizontal surface, positive downward. All these angles are in degrees. len12 is the length between point 1 and point 2, and d1 is the positive distance from point 1 to a point located between point 1 and point 2.
Returns: out – the direction angles: azimuth, with 0 or 360 pointing north, and the dip angle measured from the horizontal surface, positive downward
Return type: tuple of floats, (azm, dip)
Example
>>> from pygslib.drillhole import interp_ang1D
>>> interp_ang1D(azm1=45, dip1=75, azm2=90, dip2=20, len12=10, d1=5)
(80.74163055419922, 40.84182357788086)
See also
Note
The output direction angles are interpolated using an average weighted by the distances d1 and len12-d1. To avoid issues with angles, the azimuth and dip are converted to x,y,z, then interpolated, and finally converted back to azimuth, dip angles.
-
pygslib.drillhole.txt2int(array text)
Creates an integer ID for categories in a text array.
Parameters: text (array of type str) –
Returns:
Return type: array of integers
Todo
- [] Add example section
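A short untested sketch (the exact integer coding is not documented above, so the output is not shown):

>>> import numpy as np
>>> from pygslib.drillhole import txt2int
>>> codes = txt2int(np.array(['A', 'B', 'A', 'C']))  # one integer ID per category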
PyGSLIB.blockmodel
PyGSLIB Blockmodel, Module to handle blockmodel data.
Copyright (C) 2015 Adrian Martinez Vargas
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>
-
class pygslib.blockmodel.Blockmodel
Blockmodel object.
Parameters: - nx,ny,nz (integer) – Number of rows columns and levels
- xorg,yorg,zorg (float) – Coordinates of the lower left corner (not centroid)
- dx,dy,dz (float) – Size of the parent block
-
nx, ny, nz
int – number of rows, columns, levels
-
xorg, yorg, zorg
float – coordinates of the lower left corner of the first block
-
dx, dy, dz
float – sizes of the blocks
-
bmtable
Pandas DataFrame – table with blocks
-
gridtable
Pandas DataFrame
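A minimal untested sketch, assuming the constructor takes the parameters listed above as keywords:

import pygslib

# a 10 x 10 x 5 block model with 10 m cubic parent blocks
mymodel = pygslib.blockmodel.Blockmodel(nx=10, ny=10, nz=5,
                                        xorg=0., yorg=0., zorg=0.,
                                        dx=10., dy=10., dz=10.)
mymodel.create_IJK(overwrite=True)  # see create_IJK below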
-
block2point(np.ndarray [double, ndim=1] x, np.ndarray [double, ndim=1] y, np.ndarray [double, ndim=1] z, str prop_name)
Assigns a block property to the points inside a block.
Parameters: - x,y,z (float 1D numpy arrays) – coordinates of the points. Array shapes must be equal.
- prop_name (str) – name of the property at block model
Example
>>> Au_at_point = myblockmodel.block2point(x, y, z, prop_name='Au')
Note
Points not laying in an existing block will be returned with value numpy.nan
-
blockinsurface(object surface, str field, double azm=0, double dip=90, int test=1, bint overwrite=False)
Creates blocks given a VTK surface (polydata) depending on test criteria:
Parameters: - surface (VTK polydata) – this may work with any 3D object..., not necessarily triangles
- field (str) – name of the new field with selection results
- azm, dip (float) – rotation defining the direction used to test the points; azm 0 will point north and positive dip means downward direction (like surface drillholes)
- test (integer, default 1) –
    test == 1  test inside closed surface, using vtkOBBTree::InsideOrOutside. A closed surface is required.
    test == 2  test 'above' surface
    test == 3  test 'below' surface
    test == 4  test 'inside' surface (the surface can be open)
- overwrite (boolean) – overwrite flag; if True and the field exists in bmtable the values will be overwritten
Example
>>> mymodel.blockinsurface(surface, field='INSIDE', azm=0, dip=90, test=1, overwrite=False)
Note
This function calls vtktools.pointquering for all the points existing in the block model. The function is not optimized for large models.
A new set of blocks will be created if there are no blocks defined. If there are blocks in the model and overwrite==True the blocks will be removed first.
-
blocks2vtkRectilinearGrid(str path)
Exports blocks of a full grid to a vtkRectilinearGrid file.
Parameters: path (string) – file name and path, without extension. The file extension (*.vtr) will be added automatically.
Examples
>>> blocks2vtkRectilinearGrid('myfile')
Note
This will only work for a full grid, in other words, if all nx*ny*nz blocks are defined.
All the fields defined in the block model will be exported
-
blocks2vtkUnstructuredGrid(str path, str varname=None)
Exports blocks of a partial grid to vtkUnstructuredGrid cells.
Parameters: - path (string) – file name and path, without extension. The file extension (*.vtu) will be added automatically.
- varname (string) – If None export all columns, otherwise export only the column varname
Examples
>>> blocks2vtkUnstructuredGrid('myfile', 'AU')
Note
Requires the fields XC, YC and ZC.
Only numeric variables can be exported. TODO: export all.
This calls pygslib.vtktools.partialgrid2vtkfile.
-
blocks2vtkUnstructuredGrid_p(str path, str varname=None)
Exports block centroids of a partial grid to vtkUnstructuredGrid points.
Parameters: - path (string) – file name and path, without extension. The file extension (*.vtu) will be added automatically.
- varname (string) – If None export all columns, otherwise export only the column varname
Examples
>>> blocks2vtkUnstructuredGrid('myfile', 'AU')
Note
Requires the fields XC, YC and ZC.
Only numeric variables can be exported. TODO: export all.
This calls pygslib.vtktools.partialgrid2vtkfile.
-
calc_ijk(bint overwrite=False)
Calculates the IJK field from the IX, IY, IZ indices. If IJK exists and overwrite=True the existing values will be overwritten.
Parameters: overwrite (Boolean, default False) – if True overwrites any existing IJK field
Example
>>> myblockmodel.calc_ijk()
>>> myblockmodel.calc_ijk(overwrite=True)
-
calc_ixyz_fromijk(bint overwrite=False)
Calculates the IX, IY, IZ fields from the IJK index. If IX, IY, IZ exist and overwrite=True the existing values will be overwritten.
Parameters: overwrite (Boolean, default False) –
Example
>>> myblockmodel.calc_ixyz_fromijk()
>>> myblockmodel.calc_ixyz_fromijk(overwrite=True)
-
calc_ixyz_fromxyz(bint overwrite=False)
Calculates the IX, IY, IZ fields from the XC, YC, ZC coordinates. If IX, IY, IZ exist and overwrite=True the existing values will be overwritten.
Parameters: overwrite (Boolean, default False) –
Example
>>> myblockmodel.calc_ixyz_fromxyz()
>>> myblockmodel.calc_ixyz_fromxyz(overwrite=True)
-
calc_xyz_fromixyz(bint overwrite=False)
Calculates the XC, YC, ZC coordinates from the IX, IY, IZ indices. If XC, YC, ZC exist and overwrite=True the existing values will be overwritten.
Parameters: overwrite (Boolean, default False) –
Examples
>>> myblockmodel.calc_xyz_fromixyz()
>>> myblockmodel.calc_xyz_fromixyz(overwrite=True)
-
create_3Dgrid
(bint overwrite=False)¶ Creates a full block 3D grid/block model
Parameters: overwrite (boolean) – overwrite flag, if true the entire grid will be overwritten Example
>>> mymodel.create_3Dgrid(overwrite=False)
-
create_IJK
(bint overwrite=False)¶ Creates a new block model table with IJK indices.
Examples
>>> create_IJK(overwrite=True)
Note
A new set of blocks will be created if there are no blocks defined. If there are blocks in the model and overwrite==True the blocks will be removed first.
-
create_gridxy
(bint overwrite=False)¶ Creates a full 2D block grid defined in the XY direction.
The grid definition is taken from the block model definition in the X, and Y directions.
Parameters: overwrite (boolean) – overwrite flag, if true the entire grid will be overwritten Example
>>> mymodel.create_gridxy(overwrite=False)
-
create_gridxz
(bint overwrite=False)¶ Creates a full 2D block grid defined in the XZ direction.
The grid definition is taken from the block model definition in the X, and Z directions.
Parameters: overwrite (boolean) – overwrite flag, if true the entire grid will be overwritten Example
>>> mymodel.create_gridxz(overwrite=False)
-
create_gridyz
(bint overwrite=False)¶ Creates a full 2D block grid defined in the YZ direction.
The grid definition is taken from the block model definition in the Y, and Z directions.
Parameters: overwrite (boolean) – overwrite flag, if true the entire grid will be overwritten Example
>>> mymodel.create_gridyz(overwrite=False)
-
delete_blocks
()¶ Deletes the block model table
Examples
>>> myblockmodel.delete_blocks()
>>> print myblockmodel.bmtable
None
Note
This function sets Blockmodel.bmtable to None. The data will be preserved in any external instance of bmtable.
-
fillwireframe
(object surface, float toll=0, bint overwrite=False)¶ Creates a full block model given a VTK surface (polydata) using vtkPolyDataToImageStencil. The result consists of blocks with a parameter in taking values between 0 and 1:
- in = 0 represents blocks completely outside the wireframe
- in = 1 represents blocks completely inside the wireframe
- 1 > in > 0 represents blocks cut by the wireframe
The parameter in is calculated as the sum over the corners of the block, where each corner counts as 1 if it is inside the wireframe and 0 if it is outside.
The parameter in can be used as the percentage of inclusion of the block, with an error proportional to the size of the block and the resolution/complexity of the wireframe. The method may fail if the wireframe is narrower than a block, e.g. large blocks in narrow veins. In this case you can set a tolerance value 1>=toll>=0. The output may then not be used as a volume, only as a selection of blocks within the wireframe (with in>0).
Parameters: - surface (VTK polydata) – this may work with any 3D object, not necessarily triangles
- toll (float, default 0.0) – number in the interval [0, 1]. You may use toll>0 if the blocks are large and the wireframe is narrower than the block size.
- overwrite (boolean) – overwrite flag, if true the entire block model will be overwritten
Example
>>> mymodel.fillwireframe(surface, toll=0, overwrite=False)
Note
The tolerance only works in the x, y plane, not in z.
This function works well with closed surfaces.
It also works well with large and complicated wireframes, for example, Leapfrog grade shells.
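As a usage sketch, blocks cut by or inside the wireframe can then be selected from bmtable with pandas; the column name 'in' below is an assumption, check the actual name created by your pygslib version:
>>> mymodel.fillwireframe(surface)
>>> touched = mymodel.bmtable[mymodel.bmtable['in'] > 0]    # blocks cut by or inside the wireframe
>>> inside = mymodel.bmtable[mymodel.bmtable['in'] == 1]    # blocks completely inside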
-
point2block
(np.ndarray [double, ndim=1] x, np.ndarray [double, ndim=1] y, np.ndarray [double, ndim=1] z, object prop, str prop_name, bint average = False, bint overwrite = False)¶ Assigns data in a point to a block
Parameters: - x,y,z (Float 1D numpy arrays) – coordinates of the points. Array shapes may be equal
- prop (array like) – array with property shape[0] == x.shape[0] note that prop can be an array of array, for example, an array of indicators arrays
- prop_name (str) – name of the property at block model
- average (bool) – If True and there is more than one point in a block, the points will be averaged; this may fail if prop is not numeric. If False the last point in the block will be used to assign the value.
- overwrite (bool) – If False an existing property will not be overwritten
Example
>>> point2block(x, y, z, prop, prop_name='Au', average=False, overwrite=False)
TODO: pass array of properties np.ndarray [double, ndim=2] prop and [prop_names]
Note
Points not laying in an existing block will be ignored.
-
reblock
(double dx, double dy, double dz)¶ Reblocks a block model
The model is reblocked using IJK calculated from XC, YC, ZC coordinates and averaging all points with same IJK.
Note that this is like a point-to-block conversion; subblock overlapping is not taken into account.
Parameters: dx,dy,dz (floats) – New block dimensions. Note that the new dimensions must be greater than or equal to the old dimensions in all directions. Returns: - block (model instance with the model reblocked)
- err (list of variables excluded (e.g. non-numeric))
Warning
The existing model will be overwritten
-
set_block_size
()¶ set_block_size(float dx,float dy, float dz)
Set block sizes
Parameters: dx,dy,dz (float) – sizes of the blocks Examples
>>> myblockmodel.set_block_size(dx=10, dy=10, dz=5)
-
set_blocks
(object bmtable)¶ Assigns an external block model stored in a Pandas DataFrame table.
One or more of the following conditions must apply
- the block model has IJK field
- or/and has IX, IY and IZ fields
- or/and has XC, YC and ZC fields
Parameters: bmtable (Pandas DataFrame object) – block model with special coordinate or index fields Examples
>>> bmtable = pd.DataFrame({'IJK': np.array([1,2,3,4], dtype=np.int32)})
>>> myblockmodel.set_blocks(bmtable)
Note
Make sure IJK, IX, IY and IZ have dtype int32; XC, YC and ZC should have dtype float32.
At least one of the above conditions is required. In case two or more of the special fields mentioned above are defined, the consistency of the fields is not verified; for example, whether IJK is a valid representation of the fields IX, IY and IZ is not tested.
-
set_origin
(float xorg, float yorg, float zorg)¶ Sets the block model origin of coordinates. This is the lower left corner of the lower left block (not the centroid).
Parameters: xorg,yorg,zorg (float) – coordinates of the left lower corner of the first block Examples
>>> myblockmodel.set_origin(xorg=-5, yorg=-5, zorg=0)
-
set_rcl
(int nx, int ny, int nz)¶ Sets the number of blocks per row, column and level
Two conditions apply
- nx, ny and nz must be greater than zero
- nx*ny*nz must be equal to the actual number of blocks
Examples
>>> myblockmodel.set_rcl(nx=10, ny=10, nz=10)
>>> myblockmodel.set_rcl(nx=10, ny=100, nz=1)
Note
This function basically reorders the blocks along rows, columns and levels without changing the total number of blocks.
-
pygslib.blockmodel.
ijk2ind
(np.ndarray [long, ndim=1] ijk, unsigned int nx, unsigned int ny, unsigned int nz)¶ Calculates the row, column, level indices
ix, iy, iz
from IJK. The IJK block index is a unique identifier of each block position. This is equivalent to the position of a block in a gslib grid file. All the points within a block will have the same IJK number.
From IJK you can calculate ix, iy and iz as:
iz = ijk / (nx*ny)
iy = (ijk - iz*nx*ny) / nx
ix = ijk - iz*nx*ny - iy*nx
Parameters: - ijk (1D array of integers) – arbitrary raw, level and column indices
- nx,ny,nz (integers) – number of blocks per row, column, level
Returns: ix,iy,iz – The row, column and level indices
Return type: 1D array of integers
Example
>>> ijk = np.array([0, 1, 2, 3, 4, 5, 6, 7])
>>> ix, iy, iz = ijk2ind(ijk, 2, 2, 2)
>>> print ix
[0 1 0 1 0 1 0 1]
>>> print iy
[0 0 1 1 0 0 1 1]
>>> print iz
[0 0 0 0 1 1 1 1]
Note
The indices ix, iy and iz start at zero
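The same decomposition can be reproduced with plain NumPy integer arithmetic; a minimal sketch, assuming zero-based indices and the ordering given above (x fastest, z slowest):
>>> import numpy as np
>>> ijk = np.array([0, 1, 2, 3, 4, 5, 6, 7])
>>> nx, ny, nz = 2, 2, 2
>>> iz = ijk // (nx*ny)             # level
>>> iy = (ijk - iz*nx*ny) // nx     # column
>>> ix = ijk - iz*nx*ny - iy*nx     # row
>>> ix, iy, iz
(array([0, 1, 0, 1, 0, 1, 0, 1]), array([0, 0, 1, 1, 0, 0, 1, 1]), array([0, 0, 0, 0, 1, 1, 1, 1]))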
-
pygslib.blockmodel.
ind2ijk
(np.ndarray [long, ndim=1] ix, np.ndarray [long, ndim=1] iy, np.ndarray [long, ndim=1] iz, unsigned int nx, unsigned int ny, unsigned int nz)¶ Calculates the IJK block index
The IJK block index is a unique identifier of each block position. This is equivalent to the position of a block in a gslib grid file. All the points within a block will have the same IJK number.
IJK is calculated as follows:
ijk = iz*nx*ny + iy*nx + ix
Parameters: - ix,iy,iz (1D array of integers) – arbitrary raw, level and column indices
- nx,ny,nz (integers) – number of blocks per row, column, level
Returns: ijk – Unique identifier of block location
Return type: 1D array of integers
Example
>>> # a 2D grid with 2x2x1 cells
>>> ix = np.array([0,1,0,1])
>>> iy = np.array([1,1,0,0])
>>> iz = np.array([0,0,0,0])
>>> ind2ijk(ix, iy, iz, 2, 2, 1)
array([2, 3, 0, 1])
Note
The IJK index starts at zero (first block) and ends at nx*ny*nz - 1 (last block)
-
pygslib.blockmodel.
ix2x
(np.ndarray [long, ndim=1] ix, double xorg, double dx)¶ Calculates the block coordinate (x, y or z)
Returns an array of coordinates x calculated from the indices ix.
Parameters: - ix (1D array of positive integers) – arbitrary index ix (or iy, or iz)
- xorg (float) – origin of coordinate (lower left corner of the block)
- dx (float) – size of the block
Returns: x – Coordinates x corresponding to index ix
Return type: 1D array of floats
See also
Notes
The index starts at zero (first block). If there is a point i with coordinate x[i] < xorg an error will be raised
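A minimal NumPy equivalent, under the assumption that the returned coordinates are block centroids (xorg is the lower corner, so the centre of block ix sits half a block above it):
>>> import numpy as np
>>> ix = np.array([0, 1, 2])
>>> xorg, dx = 0., 10.
>>> xorg + dx/2 + ix*dx    # assumed centroid convention
array([  5.,  15.,  25.])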
-
pygslib.blockmodel.
x2ix
(np.ndarray [double, ndim=1] x, double xorg, double dx)¶ Calculates the block index (ix, iy or iz)
Returns an array of ix indices representing the row, column or level in which a point with coordinate x is located. To get the three indices ix, iy, iz you may run this function three times, changing the coordinates to x, y and z, respectively.
You can use this function to find the block that contains a point with arbitrary coordinates.
Parameters: - x (1D array of floats) – arbitrary coordinates x (or y, or z)
- xorg (float) – origin of coordinate (lower left corner of the block)
- dx (float) – size of the block
Returns: ix – Index of the blocks where points with coordinates x are located
Return type: 1D array of integers
Example
>>> a = np.array([2.1, 7.8, 15.9, 20, 30, 31])
>>> x2ix(a, xorg=2., dx=1)
array([ 0,  5, 13, 18, 28, 29])
Note
The index starts at zero (first block). If there is a point i with coordinate x[i] < xorg an error will be raised
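The index rule can be reproduced with a floor division; a minimal sketch, assuming x2ix simply floors (x - xorg)/dx:
>>> import numpy as np
>>> a = np.array([2.1, 7.8, 15.9, 20, 30, 31])
>>> np.floor((a - 2.) / 1.).astype(int)   # (x - xorg) / dx
array([ 0,  5, 13, 18, 28, 29])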
PyGSLIB.vtktools¶
PyGSLIB vtktools, Module with tools to work with VTK.
Copyright (C) 2015 Adrian Martinez Vargas
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>
-
pygslib.vtktools.
GetPointsInPolydata
(object polydata)¶ Returns the point coordinates in a polydata (e.g. a wireframe) as a numpy array.
Parameters: polydata (VTK polydata) – vtk object with data, e.g. a wireframe Returns: p – Point coordinates in the polydata object Return type: 2D array of float See also
-
pygslib.vtktools.
SavePolydata
(object polydata, str path)¶ Saves polydata into a VTK XML polydata file (‘*.vtp’)
Parameters: - polydata (VTK polydata) – vtk object with data, e.g. a wireframe
- path (str) – Extension (*.vtp) will be added if not provided
-
pygslib.vtktools.
SetPointsInPolydata
(object polydata, object points)¶ Sets the point coordinates in a VTK polydata object (e.g. a wireframe).
Parameters: - polydata (VTK polydata) – vtk object with data, e.g. a wireframe
- points (2D array) – New point coordinates to be set in the polydata. points shape must be equal to [polydata.GetNumberOfPoints(), 3]
See also
-
pygslib.vtktools.
delaunay2D
()¶ Creates a triangulated surface
Parameters: - x,y,z (np.ndarray [double, ndim=1]) – Coordinates of the input points
- Alpha (double, default(0.0)) – For a non-zero alpha value, only edges or triangles contained within a sphere centered at mesh vertices will be output. Otherwise, only triangles will be output.
- Tolerance (double, default(0.001)) – Specify a tolerance to control discarding of closely spaced points. This tolerance is specified as a fraction of the diagonal length of the bounding box of the points.
- constraints (vtkPolydata or None) – constraint polygons, lines or polylines
Returns: Return type: vtkPolyData with wireframe
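A minimal usage sketch; the positional argument order follows the parameter list above (x, y, z, Alpha, Tolerance, constraints) and should be checked against your pygslib version:
>>> import pygslib
>>> # x, y, z are 1D numpy arrays with the point coordinates
>>> surface = pygslib.vtktools.delaunay2D(x, y, z, 0., 0.001, None)
>>> pygslib.vtktools.SavePolydata(surface, 'mysurface')   # writes mysurface.vtp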
-
pygslib.vtktools.
dmtable2wireframe
(np.ndarray [double, ndim=1] x, np.ndarray [double, ndim=1] y, np.ndarray [double, ndim=1] z, np.ndarray [long, ndim=1] pid1, np.ndarray [long, ndim=1] pid2, np.ndarray [long, ndim=1] pid3, bint indexone = False, str filename = None)¶ Takes a wireframe defined by two tables and creates a VTK polydata wireframe object.
The input tables are as follow:
- x, y, z : This is the points table
- pid1,pid2,pid3: This is another table defining triangles with three point IDs.
Parameters: - x,y,z (1D array of floats) – coordinates of the points
- pid1,pid2,pid3 (1D array of integers) – triangles defined by three existing point IDs
- indexone (boolean (Default False)) – If False pid indices start at zero, otherwise pid indices start at one
- filename (Str (Default None)) – file name and path. If provided the wireframe is saved to this file.
Returns: surface – wireframe imported
Return type: VTK polydata
Todo
Add variables to triangles (vtk cell) and to points (vtk points)
Note
pid1, pid2, pid3 are the row numbers, in the point table, of the three points that form a triangle. pid indices start at zero if indexone == False, otherwise pid indices start at one.
For example, a point table:
x	y	z
.0	.0	.0
.0	.0	.0
and a triangle table:
pid1	pid2	pid3
0	1	2
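A minimal usage sketch, assuming the two tables are held in pandas DataFrames named points and triangles (hypothetical names):
>>> import pygslib
>>> surface = pygslib.vtktools.dmtable2wireframe(
...     points['x'].values, points['y'].values, points['z'].values,
...     triangles['pid1'].values, triangles['pid2'].values, triangles['pid3'].values)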
-
pygslib.vtktools.
getbounds
(object polydata)¶ Returns the bounding box limits x,y,z minimum and maximum
Parameters: polydata (VTK polydata) – vtk object with data, e.g. a wireframe Returns: xmin,xmax,ymin,ymax,zmin,zmax – The geometry bounding box Return type: float
-
pygslib.vtktools.
grid2vtkfile
(str path, np.ndarray [double, ndim=1] x, np.ndarray [double, ndim=1] y, np.ndarray [double, ndim=1] z, object data)¶ Saves data in a Vtk Rectilinear Grid file
Parameters: - path (str) – file path (relative or absolute) and name
- x,y,z (np.ndarray) – coordinates of the points
- data (array like) – array with variable values.
-
pygslib.vtktools.
loadSTL
(str filenameSTL)¶ Loads an STL wireframe file
Parameters: filenameSTL (file path) – Returns: polydata Return type: VTK Polydata object
-
pygslib.vtktools.
loadVTP
(str filenameVTP)¶ Load an XML VTK Polydata file
Parameters: filenameVTP (file path) – Returns: polydata Return type: VTK Polydata object
-
pygslib.vtktools.
partialgrid2vtkfile
(str path, np.ndarray [double, ndim=1] x, np.ndarray [double, ndim=1] y, np.ndarray [double, ndim=1] z, double DX, double DY, double DZ, object var, object varname)¶ Saves data in the cells of a VTK Unstructured Grid file
Parameters: - path (str) – file path (relative or absolute) and name
- x,y,z (np.ndarray) – coordinates of the points
- DX,DY,DZ (float) – block sizes
- var (array like) – array with variable values.
- varname (str or list of str) – name(s) of the variable(s).
-
pygslib.vtktools.
partialgrid2vtkfile_p
(str path, np.ndarray [double, ndim=1] x, np.ndarray [double, ndim=1] y, np.ndarray [double, ndim=1] z, double DX, double DY, double DZ, object var, object varname)¶ Saves data in the points of a VTK Unstructured Grid file
Parameters: - path (str) – file path (relative or absolute) and name
- x,y,z (np.ndarray) – coordinates of the points
- DX,DY,DZ (float) – block sizes
- var (array like) – array with variable values.
- varname (str or list of str) – name(s) of the variable(s).
-
pygslib.vtktools.
pointinsolid
(object surface, np.ndarray [double, ndim=1] x, np.ndarray [double, ndim=1] y, np.ndarray [double, ndim=1] z, double tolerance = .000001)¶ Finds points inside a closed surface using vtkSelectEnclosedPoints
Parameters: - surface (VTK polydata) – this may work with any 3D object, not necessarily triangles
- x,y,z (1D array of floats) – coordinates of the points to be tested
Returns: inside – Indicator of point inclusion with values [0, 1]; 0 means that the point is not inside, 1 means that the point is inside
Return type: 1D array of integers
See also
Note
It is assumed that the surface is closed and manifold. Note that if this check is not performed and the surface is not closed, the results are undefined.
TODO: add check to verify that a surface is closed
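A minimal usage sketch combining this with loadSTL (the file name is hypothetical):
>>> import numpy as np
>>> import pygslib
>>> solid = pygslib.vtktools.loadSTL('solid.stl')          # a closed solid
>>> x = np.array([0., 10.]); y = np.array([0., 10.]); z = np.array([0., 10.])
>>> inside = pygslib.vtktools.pointinsolid(solid, x, y, z)
>>> x[inside == 1]                                         # keep only enclosed points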
-
pygslib.vtktools.
pointquering
(object surface, double azm, double dip, np.ndarray [double, ndim=1] x, np.ndarray [double, ndim=1] y, np.ndarray [double, ndim=1] z, int test)¶ Finds points inside, above and below a surface/solid
Parameters: - surface (VTK polydata) – this may work with any 3D object, not necessarily triangles
- azm, dip (float) – rotation defining the direction used to test the points. azm = 0 points north and a positive dip points downward (like surface drillholes)
- x,y,z (1D array of floats) – coordinates of the points to be tested
- test (integer) –
  test==1 queries points inside a closed surface. It only works with closed solids.
  test==2 queries points above the surface.
  test==3 queries points below the surface.
  test==4 queries points above and below the surface. test==4 is similar to test==1 but it works with open surfaces.
Returns: - inside (1D array of integers) – Indicator of point inclusion with values [0, 1]; 0 means that the point is not inside, above or below the surface; 1 means that the point is inside, above or below the surface
- p1 (1D array of size 3) – the ray vector, showing where the rays are pointing in the output
See also
Note
Test 1 requires the surface to be closed; to find points between two surfaces you can use test=4. Tests 2, 3 and 4 use ray casting, and the ray orientation is defined by the azm and dip values. The results depend on this direction; for example, to find the points between two open surfaces you may use a ray direction perpendicular to the surfaces. If a surface is closed, the points above and below the surface will include points inside the surface.
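A minimal usage sketch, flagging points below an open topography surface with downward rays (the file name is hypothetical; the argument order follows the signature above):
>>> import numpy as np
>>> import pygslib
>>> topo = pygslib.vtktools.loadVTP('topo.vtp')   # open surface
>>> x = np.array([0., 10.]); y = np.array([0., 10.]); z = np.array([100., 300.])
>>> below, p1 = pygslib.vtktools.pointquering(topo, 0., 90., x, y, z, 3)   # test=3: below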
-
pygslib.vtktools.
points2vtkfile
(str path, np.ndarray [double, ndim=1] x, np.ndarray [double, ndim=1] y, np.ndarray [double, ndim=1] z, object data)¶ Save points in vtk file
Parameters: - path (str) – file path (relative or absolute) and name
- x,y,z (np.ndarray) – coordinates of the points
- data (array like) – array with variable values.
-
pygslib.vtktools.
vtk_raycasting
(object surface, object pSource, object pTarget)¶ Intersects a line defined by two points with a polydata vtk object, for example a closed surface.
This function finds intersections and the inclusion of points within, below or above a surface. It is handy to select points or define blocks inside solids.
Parameters: - surface (VTK polydata) – this may work with any 3D object, not necessarily triangles
- pSource, pTarget (3-element array-like) – point locations defining the ray
Returns: - intersect (integer, with values -1, 0, 1) – 0 means that no intersection points were found; 1 means that some intersection points were found; -1 means the ray source lies within the surface
- pointsIntersection (tuple of tuples of 3 floats) – a tuple with coordinate tuples defining the intersection points
- pointsVTKIntersectionData (a VTK polydata object) – similar to pointsIntersection but in VTK polydata format
See also
vtk_show()
,loadSTL()
Note
This code was modified from the original code by Adamos Kyriakou published in https://pyscience.wordpress.com/
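A minimal usage sketch casting a single vertical ray through a loaded surface (the file name is hypothetical):
>>> import pygslib
>>> surface = pygslib.vtktools.loadSTL('dome.stl')
>>> pSource = [0., 0., 1000.]     # ray start, above the surface
>>> pTarget = [0., 0., -1000.]    # ray end, below the surface
>>> intersect, points, vtkdata = pygslib.vtktools.vtk_raycasting(surface, pSource, pTarget)
>>> intersect                     # 1 if the ray crosses the surface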
PyGSLIB.nonlinear¶
PyGSLIB nonlinear, Module with functions for nonlinear geostatistics
Copyright (C) 2015 Adrian Martinez Vargas
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>
Code based on paper
A Step by Step Guide to Bi-Gaussian Disjunctive Kriging, by Julian M. Ortiz, Bora Oz, Clayton V. Deutsch Geostatistics Banff 2004 Volume 14 of the series Quantitative Geology and Geostatistics pp 1097-1102
Warning
This module is experimental, not tested and we are getting some validation errors in the anamorphosis with declustering data.
-
pygslib.nonlinear.
Y2Z
(np.ndarray [double, ndim=1] y, np.ndarray [double, ndim=1] PCI, double zamin, double yamin, double zpmin, double ypmin, double zpmax, double ypmax, double zamax, double yamax, double r=1)¶ Gaussian (Y) to raw (Z) transformation
This is a convenience function. It calls H = recurrentH(K, Y) and then returns Z = expand_anamor(PCI, H, r). K is deduced from PCI.shape[0]. It also linearly interpolates values outside the control points.
Parameters: - PCI (1D array of float64) – hermite coefficient
- y (1D array of float64) – Gaussian values
- r (float64, default 1) – the support effect
- ypmin,zpmin,ypmax,zpmax (float64) – z, y practical minimum and maximum
- yamin,zamin,yamax,zamax (float64) – z, y authorized minimum and maximum
Returns: Z – raw values corresponding to Y
Return type: 1D array of floats
See also
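A minimal usage sketch; PCI and the control-point values are assumed to come from a previously fitted anamorphosis (see anamor below):
>>> import numpy as np
>>> import pygslib
>>> y = np.array([-2., 0., 2.])
>>> Z = pygslib.nonlinear.Y2Z(y, PCI, zamin, yamin, zpmin, ypmin,
...                           zpmax, ypmax, zamax, yamax, r=1.)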
-
pygslib.nonlinear.
Z2Y_linear
()¶ Z2Y_linear(np.ndarray [double, ndim=1] z, np.ndarray [double, ndim=1] zm, np.ndarray [double, ndim=1] ym, double zamin, double yamin, double zpmin, double ypmin, double zpmax, double ypmax, double zamax, double yamax)
Raw (Z) to Gaussian (Y) transformation
Given a set of pairs [zm, ym] representing an experimental gaussian anamorphosis, this function linearly interpolates the y values corresponding to z within the interval [zamin, zamax]
Parameters: - z (1D array of float64) – raw (Z) values where we want to know Gaussian (Y) equivalent
- zm,ym (1D array of float64) – tabulated [Z,Y] values
- zpmin, ypmax, zpmax (ypmin,) – z, y practical minimum and maximum
- zamin, yamax,zamax (yamin,) – z, y authorized minimum and maximum
Returns: Y – gaussian values corresponding to z
Return type: 1D array of float64
See also
-
pygslib.nonlinear.
anamor
()¶ anamor(z, w, ltail=1, utail=1, ltpar=1, utpar=1, K=30, zmin=None, zmax=None, ymin=None, ymax=None, zamin=None, zamax=None, zpmin=None, zpmax=None, ndisc=1000, **kwargs)
Function for interactive fitting of the gaussian anamorphosis in point support, as follows:
- create experimental pairs z,y from the input data
- create an arbitrary sequence of ndisc Y values in the interval [ymin, ymax]
- back transform Y to obtain pairs Z, Y. You can define arbitrary maximum and minimum practical values beyond the data limits, and extrapolation functions
- calculate the gaussian anamorphosis with hermite polynomials and hermite coefficients
- calculate the variance from the hermite coefficients
- assign or calculate the authorized and practical intervals of the gaussian anamorphosis
You may change the parameters ltail, utail, ltpar, utpar, K, zmin, zmax, ymin, ymax, zamin, zamax, zpmin, zpmax until the variance calculated with the hermite coefficients approximates the declustered experimental variance.
Note that the maximum and minimum values zmin, zmax can be arbitrary; you can use zpmin, zpmax to create a transformation table within your desired intervals.
Parameters: - z,w (1D numpy arrays of floats) – Variable and declustering weight.
- ltail, utail, ltpar, utpar – extrapolation functions, as used in pygslib.nonlinear.backtr and gslib.__dist_transf.backtr
- K (integer (optional, default 30)) – number of hermite polynomials.
- zmin, zmax (floats (optional)) – minimum and maximum value in the lookup table used to fit the anamorphosis. If None the data limits are used; if these values are beyond the data limits, interpolation functions are used to deduce the Z values corresponding to the Y values
- ymin, ymax (floats (optional)) – minimum and maximum gaussian values. If not defined the y equivalents of the data limits are used. Good values are -5 and 5; these are gaussian values with low probability
- zamin, zamax, zpmin, zpmax (floats (optional)) – authorized and practical z value intervals. The authorized intervals are where the gaussian anamorphosis does not fluctuate. Beyond the authorized intervals, linear extrapolation to the practical intervals is used.
- ndisc (integer (optional, default 1000)) – number of discretization intervals between ymin and ymax.
- **kwargs (more parameters) – extra parameters with plotting style
Returns: - PCI, H – hermite coefficients and monomials
- raw, zana, gauss – z experimental extended, z calculated with the experimental anamorphosis, and gaussian values
- Z (array of float) – Z of the corrected gaussian anamorphosis pairs [Z, Y]
- P (array of float) – Probability P{Z<c}
- raw_var, PCI_var – variance experimental (declustered) and calculated from hermite polynomials
- fig (matplotlib figure) – gaussian anamorphosis plot
Note
The pair [Z, P] defines the CDF in point support.
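A minimal usage sketch (the unpacking order follows the returns listed above; treat it as an assumption and check it against your pygslib version):
>>> import pygslib
>>> # z, w are 1D arrays with the variable and the declustering weights
>>> PCI, H, raw, zana, gauss, Z, P, raw_var, PCI_var, fig = pygslib.nonlinear.anamor(z, w, K=30)
>>> # accept the fit when PCI_var approximates raw_var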
-
pygslib.nonlinear.
anamor_blk
()¶ anamor_blk(PCI, H, r, gauss, Z, ltail=1, utail=1, ltpar=1, utpar=1, raw=None, zana=None, **kwargs)
Function to obtain the gaussian anamorphosis in block support from the punctual anamorphosis
Parameters: - PCI, H – hermite polynomial coefficients and monomials
- r (float) – support correlation coefficient
- gauss, Z – pair of gaussian and corrected Z anamorphosis values
- ltail, utail, ltpar, utpar – extrapolation functions, as used in pygslib.nonlinear.backtr and gslib.__dist_transf.backtr
- raw, zana – extended experimental z and non-transformed gaussian z calculated with hermite polynomials
- **kwargs (more parameters) – extra parameters with plotting style
Returns: - ZV (array of float) – corrected Z values in block support
- PV (array of float) – Probability P{Z<c}
- fig (matplotlib figure) – gaussian anamorphosis plot
Note
The pair [ZV, PV] defines the CDF in block support.
-
pygslib.nonlinear.
anamor_raw
(z, w, K=30, **kwargs)¶ Non-interactive gaussian anamorphosis calculated directly from data
Parameters: - z,w (1D numpy arrays of floats) – Variable and declustering weight.
- K (integer (optional, default 30)) – number of hermite polynomials.
- **kwargs (more parameters) – extra parameters with plotting style
Returns: - PCI, H – hermite coefficients and monomials
- raw, zana, gauss – z experimental extended, z calculated with the experimental anamorphosis, and gaussian values
- raw_var, PCI_var – variance experimental (declustered) and calculated from hermite polynomials
- fig (matplotlib figure) – gaussian anamorphosis plot
-
pygslib.nonlinear.
backtr
()¶ backtr(y, transin, transout, ltail, utail, ltpar, utpar, zmin, zmax, getrank)
Back transformation from normal scores to raw values, as in GSLIB
Parameters: - y (1D numpy array of floats) – Gaussian values.
- transin,transout (1D numpy arrays of floats) – transformation table as obtained in function ttable
- ltail (integer) –
- utail (integer) –
- ltpar (float) –
- utpar (float) –
- zmin (float) –
- zmax (float) –
- getrank (Boolean default False) –
Returns: z (1D numpy array of floats) – raw back-transformed values of the gaussian variable y
Note
This function uses gslib.__dist_transf.backtr
Calculates authorized intervals for the gaussian anamorphosis with hermite polynomials
Parameters: - zana, zraw, gauss (1D arrays of floats) – z values from the anamorphosis, experimental z values and gaussian values. These arrays must have the same shape and only one dimension.
- zpmin, zpmax (floats) – practical minimum and maximum values
Returns: i, j, ii, jj – indices of zamin, zamax, zpmin, zpmax in the arrays zana, zraw, gauss
Note
This is a helper function used by pygslib.nonlinear.anamor.
Calculates authorized intervals for the gaussian anamorphosis with hermite polynomials in block support.
Parameters: - zana, gauss (1D arrays of floats) – z values in block support from the anamorphosis and gaussian values. These arrays must have the same shape and only one dimension.
- zpmin, zpmax (floats) – practical minimum and maximum values
Returns: i, j, ii, jj – indices of zamin, zamax, zpmin, zpmax in the arrays zana, zraw, gauss
Note
This is a helper function used by pygslib.nonlinear.anamor_blk.
The indices ii, jj of zpmin, zpmax are dummies, set as jj = zana.shape[0]-1 and ii = 0.
-
pygslib.nonlinear.
expand_anamor
(np.ndarray [double, ndim=1] PCI, np.ndarray [double, ndim=2] H, double r=1.)¶ Expands the anamorphosis function, that is \(Z = \sum_p PCI_p \, r^p \, H_p(Yv)\)
r is the support effect. If r = 1, Z in point support will be returned.
Parameters: - PCI (1D array of floats) – hermite coefficient
- H (2D array of floats) – Hermite monomials H(i,y) with shape [K+1,len(y)]. See recurrentH
- r (float, default 1) – the support effect
Returns: Z – raw values given by the anamorphosis expansion
Return type: 1D array of floats
See also
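A minimal NumPy sketch of the same expansion, assuming PCI, H and r are already defined with the shapes documented above:
>>> import numpy as np
>>> K = PCI.shape[0] - 1
>>> Z = np.dot(PCI * r**np.arange(K + 1), H)   # sum_p PCI_p * r^p * H_p(y)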
-
pygslib.nonlinear.
f_covar_ZvZv
(double ro, double s, double r, np.ndarray [double, ndim=1] PCI, double Covar_ZvZv=0)¶ This is an internal function used to deduce the coefficient ro = covar(Yv, Yv*). This function represents the expression:
\(Covar(Zv, Zv^*) = \sum_n PCI_n^2 \, r^n \, s^n \, ro^n\)
ro is necessary to account for the conditional bias in the information effect.
see “The information effect and estimating recoverable reserves” J. Deraisme (Geovariances), C. Roth (Geovariances) for more information
Parameters: - ro, s (r,) – support effect and information effect coefficients.
- PCI (1D array of floats) – hermite coefficients
- Covar_ZvZv (float) – Block covariance (correlation) between true Zv and estimate Zv*
Note
\(Covar(Zv, Zv^*) = var(Zv) - \text{kriging variance} - \text{Lagrange multiplier}\)
see expression 7.47, page 97 on Basic Linear Geostatistics by Margaret Armstrong.
In case of information effect this can be calculated with a dummy dataset in a single block representing future information, for example blast holes.
Note that the slope of regression is
\(p = Covar(Zv, Zv^*) / (Covar(Zv, Zv^*) - \text{Lagrange multiplier})\)
-
pygslib.nonlinear.
f_var_Zv
(double r, np.ndarray [double, ndim=1] PCI, double Var_Zv=0)¶ This is an internal function used to deduce the coefficients r (or s), representing the support effect. It defines the relations:
\(Var(Zv) = \sum_n PCI_n^2 \, r^{2n}\)
or
\(Var(Zv^*) = \sum_n PCI_n^2 \, s^{2n}\)
r is necessary to account for the support effect; s is necessary to account for smoothing in the information effect.
see “The information effect and estimating recoverable reserves” J. Deraisme (Geovariances), C. Roth (Geovariances) for more information
Parameters: - r (float64) – r or s coefficient representing support effect of Zv (or Zv*)
- PCI (1D array of floats) – hermite coefficients
- Var_Zv (float64) – Block Variance var(Zv) or var(Zv*)
Note
var(Zv) can be calculated as C(0)-gamma(v,v) or C(v,v) see function block_covariance
var(Zv*) = var(Zv) - kriging variance - 2 * Lagrange multiplier
In case of information effect this can be calculated with a dummy dataset in a single block representing future information, for example blast holes.
-
pygslib.nonlinear.
findcontrolpoints
(zana, zraw, gauss, zpmin, zpmax, zamin, zamax)¶ Find the index of predefined authorized intervals in the arrays zana, zraw and gauss
Parameters: - zana, zraw, gauss (1D arrays of floats) – z values from the anamorphosis, experimental z values and gaussian values. These arrays must have the same shape and only one dimension.
- zpmin, zpmax, zamin, zamax (floats) – practical minimum, practical maximum, authorized minimum and authorized maximum
Returns: i, j, ii, jj – indices of zamin, zamax, zpmin, zpmax in the arrays zana, zraw, gauss
Note
This is a helper function used by pygslib.nonlinear.anamor.
-
pygslib.nonlinear.
fit_PCI
()¶ fit_PCI(np.ndarray [double, ndim=1] z, np.ndarray [double, ndim=1] y, np.ndarray [double, ndim=2] H, float meanz=np.nan)
Fits the hermite coefficients (PCI)
Parameters: - z (1D array of float64) – Raw values sorted
- y (1D array of float64) – Gaussian values calculated for the right part of the bin.
- meanz (float64, default np.nan) – mean of z, if NaN then the mean will be calculated as np.mean(z)
Returns: - PCI (1D array of floats) – Hermite coefficients or PCI
- g (1D array of floats) – pdf value (g[i]) corresponding to each gaussian value (y[i])
See also
Note
PCI[0] = mean(z) and the variance is sum(PCI[1...n]^2). To validate the fit, calculate the variance with the function var_PCI() and compare it with the experimental variance of z. You may also validate the fit by calculating error = z - PHI(y), where PHI(y) are the z’ values calculated with the hermite polynomial expansion.
-
pygslib.nonlinear.
get_r
(double Var_Zv, np.ndarray [double, ndim=1] PCI)¶ This function deduces the value of the support effect coefficient r or the information effect coefficient, smoothing component, s defined by the equations:
\(Var(Zv) = \sum_n PCI_n^2 \, r^{2n}\)
and
\(Var(Zv^*) = \sum_n PCI_n^2 \, s^{2n}\)
The value of r is deduced by finding the root of the equation f_var_Zv, using the classic Brent method (see scipy.optimize.brentq)
Parameters: - PCI (1D array of float64) – hermite coefficient
- Var_Zv (float64) – Block variance
Returns: r – support effect coefficient r or information effect coefficient s
Return type: float64
See also
f_var_Zv()
,fit_PCI()
,anamor()
,scipy.optimize.brentq()
Note
var(Zv) can be calculated as C(0)-gamma(v,v) or C(v,v) see function block_covariance
var(Zv*) = var(Zv) - kriging variance - 2 * Lagrange multiplier
In case of information effect this can be calculated with a dummy dataset in a single block representing future information, for example blast holes.
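A minimal SciPy sketch of the same root-finding idea, under the usual convention that PCI[0] is the mean and is excluded from the variance sum (PCI and Var_Zv are assumed to be known):
>>> import numpy as np
>>> from scipy.optimize import brentq
>>> n = np.arange(1, PCI.shape[0])
>>> f = lambda r: np.sum(PCI[1:]**2 * r**(2*n)) - Var_Zv   # zero at the solution
>>> r = brentq(f, 0., 1.)                                  # support effect coefficient in [0, 1]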
-
pygslib.nonlinear.
get_ro
(double Covar_ZvZv, np.ndarray [double, ndim=1] PCI, double r, double s)¶ This function deduces the information effect coefficient, conditional bias component, ro defined by the equations:
\(Covar(Zv, Zv^*) = \sum_n PCI_n^2 \, r^n \, s^n \, ro^n\)
ro is necessary to account for the conditional bias in the information effect.
The value of ro is deduced by finding the root of the equation f_covar_ZvZv, using the classic Brent method (see scipy.optimize.brentq)
Parameters: - s (r,) – support effect and information effect (smoothing component)
- PCI (1D array of floats) – hermite coefficients
- Covar_ZvZv (float) – Block covariance (correlation) between true Zv and estimate Zv*
Note
\(Covar(Zv, Zv^*) = var(Zv) - \text{kriging variance} - \text{Lagrange multiplier}\)
see expression 7.47, page 97 on Basic Linear Geostatistics by Margaret Armstrong.
In case of information effect this can be calculated with a dummy dataset in a single block representing future information, for example blast holes.
Note that the slope of regression is
\(p = Covar(Zv, Zv^*) / (Covar(Zv, Zv^*) - \text{Lagrange multiplier})\)
-
pygslib.nonlinear.
gtcurve
()¶ TODO:
-
pygslib.nonlinear.
nscore
(z, transin, transout, getrank)¶ Normal score transformation, as in GSLIB
Parameters: - z (1D numpy array of floats) – Variable to transform.
- transin,transout (1D numpy arrays of floats) – transformation table as obtained in function ttable
Returns: y (1D numpy array of floats) – normal score transformation of variable z
Note
This function uses gslib.__dist_transf.nscore
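A minimal round-trip sketch combining ttable, nscore and backtr; the backtr argument order follows its parameter list above and the tail values are illustrative:
>>> import pygslib
>>> transin, transout = pygslib.nonlinear.ttable(z, w)        # transformation table
>>> y = pygslib.nonlinear.nscore(z, transin, transout, False)
>>> z_back = pygslib.nonlinear.backtr(y, transin, transout,
...                                   1, 1, 1., 1., z.min(), z.max(), False)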
-
pygslib.nonlinear.
plotgt
()¶ TODO:
-
pygslib.nonlinear.
recurrentH
(np.ndarray [double, ndim=1] y, int K=30)¶ Calculates the hermite polynomials with the recurrent formula
Parameters: - y (1D array of float64) – Gaussian values calculated for the right part of the bin.
- K (int32, default 30) – Number of hermite polynomials
Returns: H – Hermite monomials H(i,y) with shape [K+1,len(y)]
Return type: 2D array of float64
See also
pygslib.gslib.__dist_transf.anatbl()
Note
The y values must be calculated on the right side of the bin, as shown in fig VI.13, page 478 of Mining Geostatistics (A. G. Journel and Ch. J. Huijbregts). The function pygslib.gslib.__dist_transf.anatbl was prepared to provide these values, considering the declustering weight if necessary.
The results from pygslib.gslib.__dist_transf.ns_ttable are inappropriate for this calculation because they correspond to the middle of the bin intervals.
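The recurrence itself is short; a minimal NumPy sketch of the normalized hermite recurrence from the Ortiz/Oz/Deutsch paper cited above (a sketch, not the library code):
>>> import numpy as np
>>> def recurrent_h(y, K=30):
...     H = np.zeros((K + 1, len(y)))
...     H[0] = 1.0     # H_0(y) = 1
...     H[1] = -y      # H_1(y) = -y
...     for k in range(1, K):
...         # H_{k+1}(y) = -y*H_k(y)/sqrt(k+1) - sqrt(k/(k+1))*H_{k-1}(y)
...         H[k + 1] = -y*H[k]/np.sqrt(k + 1) - np.sqrt(k/(k + 1.0))*H[k - 1]
...     return H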
-
pygslib.nonlinear.
stats
(z, w)¶ Reports some basic stats using declustering weights
Parameters: - z,w (1D arrays of floats) – Variable and declustering weight.
- iwt (boolean default True) – If True declustering weights will be used to calculate statistics
- report (boolean default True) – If True a printout will be produced
Returns: xmin, xmax, xcvr, xmen, xvar – minimum, maximum, coefficient of variation, mean and variance
Note
This function uses gslib.__plot.probplt to obtain the stats.
-
pygslib.nonlinear.
ttable
(z, w)¶ Creates a transformation table.
Parameters: - z,w (1D numpy arrays of floats) – Variable and declustering weight.
Returns: transin, transout – 1D numpy arrays with pairs of raw and gaussian values
Note
This function uses gslib.__dist_transf.ns_ttable.
-
pygslib.nonlinear.
var_PCI
()¶ var_PCI(np.ndarray [double, ndim=1] PCI)
Calculates the variance from the hermite coefficients (PCI)
Parameters: PCI (1D array of float64) – hermite coefficient Returns: var – variance calculated with hermite polynomials Return type: float64 See also
Note
The output may be used to validate the PCI coefficients; it should be close to the experimental variance of z.
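Under the convention noted in fit_PCI (PCI[0] is the mean), this reduces to the sum of the squared remaining coefficients; a minimal sketch:
>>> import numpy as np
>>> var = np.sum(PCI[1:]**2)   # point-support variance (r = 1) from hermite coefficients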
PyGSLIB.charttable¶
Charttable: HTML Tables with bars charts for exploratory data analysis
Copyright 2016, Adrian Martinez Vargas. Licensed under GPL.
-
class
pygslib.charttable.
HTMLTable
(data)¶ Define and create tables with special formats: bars and coloured columns. This is useful for exploratory data analysis
-
table
¶ dict – Table with special columns
-
html
¶ str – HTML code of the table with special columns
-
nrows
¶ int – Number of rows
-
columns
list of str – Column labels. The table will be plotted in the same order
Note
This is intended to plot nice tables with special format in Ipython or Jupyter notebooks. The formats are
- simple text
- colored cells
- colored bars
The parameters for colour, bar size and text can be different; for example, a column can show colour as rock type, bar size as gold grade and text as silver grade. In other words, 3 different sources of information can be represented in one column
-
addcolumn
(name, text, bar=None, colour=None, overwrite=False)¶ Add a special column to self.table
Parameters: - name (str) – column name
- text (1D array like) – array of textual values for the column
- bar (1D array like) – array with bar sizes (0-100)
- colour (1D array like) – array of colours compatible with the class Colour
- overwrite (boolean) – set to True to overwrite column with existing name
-
clear_table
()¶ Removes all data from self.table and reassigns the index from self.data
-
display
()¶ Display the Table in Ipython/Jupyter notebooks
-
make_html
()¶ Makes the HTML code of the table on self.html
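A minimal usage sketch of the workflow (the data and column names are placeholders):
>>> import pygslib
>>> t = pygslib.charttable.HTMLTable(mydata)                     # mydata: a pandas DataFrame
>>> t.addcolumn('Au', text=mydata['Au'], bar=mydata['Au_pct'])   # text plus a bar per row
>>> t.display()                                                  # render in a Jupyter notebook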
-
-
class
pygslib.charttable.
Leyend_cat
(values, cmax='red', cmin='blue', undef='grey', convert=None)¶ Leyend_cat(values, cmax=Color("red"), cmin=Color("blue"), undef=Color("grey"), nval=5)
Define a legend class for categories
Parameters: - values (array) – Categories with any arbitrary value.
- cmax (cmin,) – Minimum and maximum colours for the colour scale.
- undef (instance of class Color) – Color for undefined values (nan or out of range).
- nval (numerical value) – Number of intervals in the colour scale.
-
c
¶ Numpy array of Color instances – colours of the colour scale. c[0] is for undefined values
-
v
¶ Numpy array of numeric values – corresponding value interval [v(i),v(i+1)[ for the colour c(i). [v(n), inf[ will take the colour c(n).
Note
Use function apply_colour to get the color for each value in an arbitrary array
-
apply_colour
(data)¶ Returns colour corresponding to each value in an array
Examples
>>> cc = my_leyend_num.apply_colour(['granite', 'basalt', 'NA'])
-
class
pygslib.charttable.
Leyend_num
(vmin=0, vmax=10, cmax='red', cmin='blue', undef='grey', nval=5, convert=None, log=True)¶ Leyend_num(vmin=0, vmax=10, cmax=Color("red"), cmin=Color("blue"), undef=Color("grey"), nval=5)
Define a legend class for continuous numerical intervals
Parameters: - vmax (vmin,) – Minimum and maximum values for the colour scale.
- cmax (cmin,) – Minimum and maximum colours for the colour scale.
- undef (instance of class Color) – Color for undefined values (nan or out of range).
- nval (numerical value) – Number of intervals in the colour scale.
-
c
¶ Numpy array of Color instances – colours of the colour scale. c[0] is for undefined values
-
v
¶ Numpy array of numeric values – corresponding value interval [v(i),v(i+1)[ for the colour c(i). [v(n), inf[ will take the colour c(n).
Note
Use function apply_colour to get the color for each value in an arbitrary array
-
apply_colour
(data)¶ Returns colour corresponding to each value in an array
Examples
>>> cc = my_leyend_num.apply_colour([77, 87.9, 77])