raytraverse.evaluate

BaseMetricSet

class raytraverse.evaluate.BaseMetricSet(vec, omega, lum, vm, metricset=None, scale=179.0, omega_as_view_area=True, guth=True, warn=False, **kwargs)[source]

Bases: object

object for calculating metrics based on a view direction and rays consisting of direction, solid angle, and luminance information

by encapsulating these calculations within a class, metrics with redundant calculations can take advantage of cached results; for example, dgp does not need to recalculate illuminance when it has already been directly requested. all metrics can be accessed as properties (and are calculated just in time), or the object can be called (with no arguments) to return a np.array of all metrics defined in "metricset"

Parameters
  • vm (raytraverse.mapper.ViewMapper) – the view direction

  • vec (np.array) – (N, 3) directions of all rays in view

  • omega (np.array) – (N,) solid angle of all rays in view

  • lum (np.array) – (N,) luminance of all rays in view (multiplied by “scale”)

  • metricset (list, optional) – keys of metrics to return, same as property names

  • scale (float, optional) – scalefactor for luminance

  • omega_as_view_area (bool, optional) – take sum(omega) as the view area. if False, corrects omega to vm.area

  • warn (bool, optional) – if False, suppresses numpy warnings (zero division, etc.) when accessed via __call__

  • kwargs – additional arguments that may be required by additional properties

allmetrics = ['illum', 'avglum', 'loggcr', 'gcr', 'pwgcr', 'logpwgcr', 'density', 'avgraylum', 'pwavglum', 'maxlum']
safe2sum = {'avglum', 'density', 'illum'}
defaultmetrics = ['illum', 'avglum', 'loggcr']

available metrics (and the default return set)
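A minimal usage sketch (the ray arrays are placeholders and the ViewMapper constructor arguments are assumptions, not taken from this page); calling the object with no arguments returns the metrics listed in defaultmetrics:

import numpy as np
from raytraverse.mapper import ViewMapper
from raytraverse.evaluate import BaseMetricSet

rng = np.random.default_rng(0)
# placeholder rays: directions, solid angles, luminances
vecs = rng.normal(size=(1000, 3))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
omegas = np.full(1000, 2 * np.pi / 1000)
lums = rng.uniform(0, 1, 1000)

vm = ViewMapper((0, 1, 0), viewangle=180)  # assumed constructor signature
bms = BaseMetricSet(vecs, omegas, lums, vm)
print(bms())      # np.array of ['illum', 'avglum', 'loggcr']
print(bms.illum)  # metrics are also available as cached properties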

classmethod check_metrics(metrics, raise_error=False)[source]

returns a list of valid metric names from the argument. if raise_error is True, raises an AttributeError

classmethod check_safe2sum(metrics)[source]

checks if a list of metrics is safe to compute for separate sources before adding

property vec
property lum
property omega
property ctheta

cosine of the angle between ray and view

property radians

angle between ray and view

property pos_idx
property pweight
property pweighted_area
property illum

illuminance

property avglum

average luminance

property maxlum

peak luminance

property pwavglum

position weighted average luminance

property avgraylum

average luminance (not weighted by omega)

property gcr

a unitless measure of relative contrast defined as the average of the squared luminances divided by the average luminance squared
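A minimal numpy sketch of this definition (unweighted, for illustration only; the property itself may weight by solid angle):

import numpy as np

def gcr(lum):
    # mean of squared luminance over squared mean luminance
    lum = np.asarray(lum)
    return np.mean(lum**2) / np.mean(lum)**2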

property pwgcr

a unitless measure of relative contrast defined as the average of the squared luminances divided by the average luminance squared weighted by a position index

property logpwgcr

a unitless measure of relative contrast defined as the log of gcr

property loggcr

a unitless measure of relative contrast defined as the log of gcr

property density

MultiLumMetricSet

class raytraverse.evaluate.MultiLumMetricSet(vec, omega, lum, vm, metricset=None, scale=179.0, omega_as_view_area=True, **kwargs)[source]

Bases: BaseMetricSet

object for calculating metrics based on a view direction and rays consisting of direction, solid angle, and luminance information

by encapsulating these calculations within a class, metrics with redundant calculations can take advantage of cached results; for example, dgp does not need to recalculate illuminance when it has already been directly requested. all metrics can be accessed as properties (and are calculated just in time), or the object can be called (with no arguments) to return a np.array of all metrics defined in "metricset"

Parameters
  • vm (raytraverse.mapper.ViewMapper) – the view direction

  • vec (np.array) – (N, 3) directions of all rays in view

  • omega (np.array) – (N,) solid angle of all rays in view

  • lum (np.array) – (N, M) luminance of all rays in view (multiplied by “scale”)

  • metricset (list, optional) – keys of metrics to return, same as property names

  • scale (float, optional) – scalefactor for luminance

  • kwargs – additional arguments that may be required by additional properties

property illum

illuminance

property avglum

average luminance

property avgraylum

average luminance (not weighted by omega)

property gcr

a unitless measure of relative contrast defined as the average of the squared luminances divided by the average luminance squared

MetricSet

class raytraverse.evaluate.MetricSet(vec, omega, lum, vm, metricset=None, scale=179.0, threshold=2000.0, guth=True, tradius=30.0, omega_as_view_area=False, lowlight=False, **kwargs)[source]

Bases: BaseMetricSet

object for calculating metrics based on a view direction and rays consisting of direction, solid angle, and luminance information

by encapsulating these calculations within a class, metrics with redundant calculations can take advantage of cached results; for example, dgp does not need to recalculate illuminance when it has already been directly requested. all metrics can be accessed as properties (and are calculated just in time), or the object can be called (with no arguments) to return a np.array of all metrics defined in "metricset"

Parameters
  • vm (raytraverse.mapper.ViewMapper) – the view direction

  • vec (np.array) – (N, 3) directions of all rays in view

  • omega (np.array) – (N,) solid angle of all rays in view

  • lum (np.array) – (N,) luminance of all rays in view (multiplied by “scale”)

  • metricset (list, optional) – keys of metrics to return, same as property names

  • scale (float, optional) – scalefactor for luminance

  • threshold (float, optional) – threshold for glaresource/background, similar in behavior to the evalglare '-b' parameter. if greater than 100, used as a fixed luminance threshold; otherwise used as a factor times the task luminance (defined by 'tradius')

  • guth (bool, optional) – if True, use Guth for the upper field of view and Iwata for the lower; if False, use Kim

  • tradius (float, optional) – radius in degrees for task luminance calculation

  • kwargs – additional arguments that may be required by additional properties

defaultmetrics = ['illum', 'avglum', 'loggcr', 'ugp', 'dgp']

available metrics (and the default return set)

allmetrics = ['illum', 'avglum', 'loggcr', 'gcr', 'pwgcr', 'logpwgcr', 'density', 'avgraylum', 'pwavglum', 'maxlum', 'ugp', 'dgp', 'tasklum', 'backlum', 'dgp_t1', 'log_gc', 'dgp_t2', 'ugr', 'threshold', 'pwsl2', 'view_area', 'backlum_true', 'srcillum', 'srcarea', 'maxlum']
safe2sum = {'avglum', 'density', 'illum', 'pwsl2', 'srcillum'}
property src_mask

boolean mask for filtering source/background rays

property task_mask
property sources

vec, omega, lum of rays above threshold

property background

vec, omega, lum of rays below threshold

property source_pos_idx
property threshold

threshold for glaresource/background, similar in behavior to the evalglare '-b' parameter

property pwsl2

position weighted source luminance squared, used by dgp, ugr, etc.: sum(Ls^2 * omega / Ps^2)
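An illustrative sketch of this quantity and of how it enters the standard published DGP formula (Wienold & Christoffersen 2006); the library implementation may differ in detail (e.g. thresholding and low-light correction):

import numpy as np

def pwsl2(src_lum, src_omega, pos_idx):
    # sum(Ls^2 * omega / Ps^2) over glare-source rays
    return np.sum(src_lum**2 * src_omega / pos_idx**2)

def dgp_standard(ev, pwsl2_value):
    # standard DGP formula, shown for context only
    return (5.87e-5 * ev
            + 9.18e-2 * np.log10(1 + pwsl2_value / ev**1.87)
            + 0.16)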

property srcillum

source illuminance

property srcarea

total source area

property maxlum

peak luminance

property backlum

average background luminance CIE estimate (official for some metrics)

property backlum_true

average background luminance mathematical

property tasklum

average task luminance

property dgp
property dgp_t1
property log_gc
property dgp_t2
property ugr
property ugp

http://dx.doi.org/10.1016/j.buildenv.2016.08.005

FieldMetric

class raytraverse.evaluate.FieldMetric(vec, omega, lum, vm=None, scale=1.0, npts=360, close=True, sigma=0.05, omega_as_view_area=True, **kwargs)[source]

Bases: BaseMetricSet

calculate metrics on full spherical point clouds rather than view based metrics.

Parameters
  • vec (np.array) – (N, 3) directions of all rays

  • omega (np.array) – (N,) solid angle of all rays

  • lum (np.array) – (N,) luminance of all rays (multiplied by “scale”)

  • metricset (list, optional) – keys of metrics to return, same as property names

  • scale (float, optional) – scalefactor for luminance

  • npts (int, optional) – for equatorial metrics, the number of points to interpolate

  • close (bool, optional) – include an npts+1 duplicate point to draw a closed curve

  • sigma (float, optional) – scale parameter of gaussian for kernel estimated metrics

  • omega_as_view_area (bool, optional) – set to True when the vectors either represent a whole sphere or a subset that does not match the viewmapper. if False, corrects boundary omega to properly trim to the correct size.

  • kwargs – additional arguments that may be required by additional properties

property tp

vectors in spherical coordinates

property phi

interpolated output phi values

property eq_xyz

interpolated output xyz vectors

property avg

overall vector (with magnitude)

property peak

peak vector (with magnitude)

property eq_lum

luminance along an interpolated equator with a bandwidth=sigma

property eq_density

ray density along an interpolated equator

property eq_illum

illuminance along an interpolated equator

property eq_gcr

cosine weighted gcr along an interpolated equator

property eq_loggc
property eq_dgp

SamplingMetrics

class raytraverse.evaluate.SamplingMetrics(vec, omega, lum, vm, scale=1.0, peakthreshold=0.0001, lmin=0, gcrnorm=8, **kwargs)[source]

Bases: BaseMetricSet

default metricset for areasampler

defaultmetrics = ['avglum', 'loggcr', 'xpeak', 'ypeak']

available metrics (and the default return set)

allmetrics = ['avglum', 'loggcr', 'xpeak', 'ypeak']
property peakvec

average vector (with magnitude) for peak rays

property xpeak

x-component of avgvec as positive number (in range 0-1)

property ypeak

y-component of avgvec as positive number (in range 0-1)

property loggcr

log of global contrast ratio

PositionIndex

class raytraverse.evaluate.PositionIndex(guth=True)[source]

Bases: object

calculate position index according to Guth/Iwata or Kim

Parameters

guth (bool) – if True, use Guth for the upper field of view and Iwata for the lower; if False, use Kim

positions(vm, vec)[source]

calculate position indices for a set of vectors

Parameters
  • vm (raytraverse.mapper.ViewMapper) – the view/analysis point, should have 180 degree field of view

  • vec (np.array) – shape (N,3) the view vectors to calculate

Returns

posidx – shape (N,) the position indices

Return type

np.array
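A hedged usage sketch (the ViewMapper constructor arguments are assumptions, not documented here):

import numpy as np
from raytraverse.mapper import ViewMapper
from raytraverse.evaluate import PositionIndex

vm = ViewMapper((0, 1, 0), viewangle=180)  # assumed constructor signature
vecs = np.array([[0.0, 1.0, 0.0], [0.2, 0.9, 0.3], [0.1, 0.9, -0.2]])
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

pi = PositionIndex(guth=True)
posidx = pi.positions(vm, vecs)  # shape (N,) position indices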

positions_vec(viewvec, srcvec, up=(0, 0, 1))[source]

retina

raytraverse.evaluate.retina.hpsf(x, fwhm=0.183333)[source]

estimate of human eye point-spread function

from: Yang, Yr., Wanek, J. & Shahidi, M. Representing the retinal line spread shape with mathematical functions. J. Zhejiang Univ. Sci. B 9, 996–1002 (2008). https://doi.org/10.1631/jzus.B0820184

raytraverse.evaluate.retina.inv_hpsf(y, fwhm=0.183333)[source]

inverse of hpsf

raytraverse.evaluate.retina.blur_sun(omega, lmax, lmin=279.33, fwhm=0.183333)[source]

calculate source correction for a small bright source

the returned value should be multiplied by omega and used to divide luminance

Parameters
  • omega (Union[float, np.array]) – solid angle in steradians of source

  • lmax (Union[float, np.array]) – maximum radiance in source (cd/m^2)/179

  • lmin (Union[float, np.array], optional) – minimum radiance value to gather after spread (mimics the peak extraction of evalglare, but note the different units: (cd/m^2)/179)

  • fwhm (Union[float, np.array], optional) – full width at half max of the Lorentzian curve (radius in degrees); default is 11 arcmin.

Returns

correction factor – value should be multiplied by omega and used to divide luminance

Return type

Union[float, np.array]
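A usage sketch applying the correction factor as described above (the input values are placeholders):

import numpy as np
from raytraverse.evaluate.retina import blur_sun

omega_src = np.array([6.8e-5])    # solid angle of a small bright source (sr)
lmax = np.array([1.1e7]) / 179    # peak radiance in (cd/m^2)/179

cf = blur_sun(omega_src, lmax)    # correction factor
omega_blurred = omega_src * cf    # multiply solid angle by the factor
lum_blurred = lmax / cf           # divide luminance by the factor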

raytraverse.evaluate.retina.rgcf_density_on_meridian(deg, mi)[source]

retinal ganglion cell field density along a meridian as a functional best fit.

the field density accounts for the input region of the ganglion cell, capturing displaced ganglion cells. This value is estimated from cone density and the inferred density of midget ganglion cells. see Watson (2014) for important caveats.

Parameters
  • deg (np.array) – eccentricity in degrees along the meridian

  • mi (int) – meridian index. [0, 1, 2, 3] for Temporal, Superior, Nasal, Inferior.

Returns

1d array of retinal ganglion cell density along a meridian

Return type

np.array

raytraverse.evaluate.retina.rgc_density_on_meridian(deg, mi)[source]

retinal ganglion cell density along a meridian as a linear interpolation between non-zero measurements

As opposed to the field density, this estimates the actual locations of ganglion cells, which could be important to consider for intrinsically photosensitive cells. These are (partially?) responsible for pupillary response. However, even ipRGCs (may?) receive signals from rods/cones

Parameters
  • deg (np.array) – eccentricity in degrees along the meridian

  • mi (int) – meridian index. [0, 1, 2, 3] for Temporal, Superior, Nasal, Inferior.

Returns

1d array of retinal ganglion cell density along a meridian

Return type

np.array

raytraverse.evaluate.retina.rgcf_density_xy(xy, func=<function rgcf_density_on_meridian>)[source]

interpolate density between meridians, selected by quadrant

Parameters
  • xy (np.array) – xy visual field coordinates on a disk in degrees (eccentricity 0-90 from fovea)

  • func (callable) – density function along a meridian; takes r in degrees and a meridian index: [0, 1, 2, 3] for Temporal, Superior, Nasal, Inferior.

Returns

1d array of single eye densities

Return type

np.array

raytraverse.evaluate.retina.binocular_density(xy, func=<function rgcf_density_on_meridian>)[source]

average density between both eyes.

Parameters
  • xy (np.array) – xy visual field coordinates on a disk (eccentricity 0-1 from fovea)

  • func (callable) – density function along a meridian; takes r in degrees and a meridian index: [0, 1, 2, 3] for Temporal, Superior, Nasal, Inferior. coordinates are for the visual field.

Returns

1d array of average binocular densities

Return type

np.array

raytraverse.evaluate.retina.rgcf_density(xy)[source]

retinal ganglion cell field density

Parameters

xy (np.array) – xy visual field coordinates on a disk (eccentricity 0-1 from fovea)

Returns

1d array retinal ganglion cell field density according to model by Watson

Return type

np.array

raytraverse.evaluate.retina.rgc_density(xy)[source]

retinal ganglion cell density (includes displaced ganglion cells)

Parameters

xy (np.array) – xy visual field coordinates on a disk (eccentricity 0-1 from fovea)

Returns

1d array retinal ganglion cell density according to measurements by Curcio

Return type

np.array

hvsgsm

raytraverse.evaluate.hvsgsm.gss_compute(imgs, illums=None, save=False, suffix='_rg.hdr', outdir=None, **kwargs)[source]

initialize a GSS instance and compute multiple images in parallel

Parameters
  • imgs (Sequence) – list of image file paths to compute. images should be 180 degree HDR angular fisheyes scaled at 1/179 cd/m^2 (standard radiance HDR)

  • illums (Sequence, optional) – if images only contain glare sources but not an accurate background, provide illuminance calculated separately (as in the eDGPs process)

  • save (bool, optional) – if True, saves an image of the glare response

  • suffix (str, optional) – suffix to append to image when save is True

  • outdir (str, optional) – save response images to a different directory

  • kwargs – passed to GSS initialization

Returns

GSS – glare sensation scores for all images (in order given)

Return type

list
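A hedged usage sketch (file names are placeholders):

from raytraverse.evaluate.hvsgsm import gss_compute

imgs = ["view1.hdr", "view2.hdr"]  # 180 degree angular fisheye HDR images
scores = gss_compute(imgs, save=True, suffix="_rg.hdr")
for img, score in zip(imgs, scores):
    print(img, score)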

raytraverse.evaluate.hvsgsm.process_gss(img, illum, ins, outf=False, outdir=None, suffix='_rg.hdr')[source]

called by gss_compute in parallel

raytraverse.evaluate.hvsgsm.f_b(b, c, phi)[source]

component of point spread function

J.K. Ijspeert, T.J.T.P. Van Den Berg, H. Spekreijse, An improved mathematical description of the foveal visual point spread function with parameters for age, pupil size and pigmentation, Vision Research, Volume 33, Issue 1, 1993, Pages 15-20, ISSN 0042-6989, https://doi.org/10.1016/0042-6989(93)90053-Y.

raytraverse.evaluate.hvsgsm.l_b(b, c, phi)[source]

component of line spread function

J.K. Ijspeert, T.J.T.P. Van Den Berg, H. Spekreijse, An improved mathematical description of the foveal visual point spread function with parameters for age, pupil size and pigmentation, Vision Research, Volume 33, Issue 1, 1993, Pages 15-20, ISSN 0042-6989, https://doi.org/10.1016/0042-6989(93)90053-Y.

GSS

class raytraverse.evaluate.GSS(view=None, gst=0, age=40, f=16.67, scale=179, pigmentation=0.106, fwidth=10, psf=True, adaptmove=True, directmove=True, raw=False)[source]

Bases: object

calculate GSS for images with angular fisheye projection

application of model described in:

A GENERIC GLARE SENSATION MODEL BASED ON THE HUMAN VISUAL SYSTEM. Vissenberg, M.C.J.M., Perz, M., Donners, M.A.H., Sekulovski, D. Signify Research, Eindhoven, The Netherlands. gilles.vissenberg@signify.com. DOI 10.25039/x48.2021.0P23

see methods for citations associated with each step in model.

the model requires the following steps:

Done when setting an image with a new resolution:

  1. calculate solid angle of pixels

  2. calculate eccentricity from guth position idx

Steps for applying model to an image:

  1. calculate eye illuminance from image

  2. mask non-glare-source pixels (not described in model, fixed threshold)

  3. calculate pupil area and diameter

  4. calculate global retinal irradiance

  5. calculate incident retinal irradiance of glare sources

  6. apply PSF to (5)

  7. apply movement affecting adaptation to (6)

  8. apply movement affecting direct response to (6)

  9. calculate local adaptation using (7)

  10. calculate V/V_m photoreceptor response (8)

  11. calculate receptor field response to (10) as DoG

  12. normalize field response with logistic

  13. sum GSS and apply position weighting

Parameters
  • view – can be None, a view file, a ViewMapper, or an hdr image with a valid view specification (must be -vta)

  • gst – glare source threshold (cd/m^2)

  • age – age of observer

  • f – eye focal length

  • scale – factor to apply to raw pixel values to convert to cd/m^2

  • pigmentation

    from Ijspeert et al. 1993:

mean for blue eyes: 0.16; brown eyes: 0.106; dark brown eyes: 0.056

  • fwidth (Union[int, float], optional) – the width of the frame for psf

  • psf (bool, optional) – apply pointspread function for light arriving at retina

  • adaptmove (bool, optional) – apply involuntary eye movement effect on local adaptation

  • directmove (bool, optional) – apply involuntary eye movement effect on direct cone response

  • raw (bool, optional) – do not weight results, used for calibration

Notes

set self.lum, either by initializing with an image or with the parameter setter, then compute:

gss = GSS("img.hdr")
gss.lum = "img.hdr"
score = gss.compute()

additional images can be loaded and computed with the parameter setter. by loading images with the same resolution and view size on an initialized object, substantial re-computation can be avoided.

Alternatively, to get access to process arrays, or to override pupil adaptation and/or glare source isolation:

e_g, pupa, pupd = self.adapt(ev_eye)
img_gs = self.get_glare_sources()
r_g, parrays = self.glare_response(img_gs, e_g, pupa, pupd,
                                   return_arrays=True)

For processing multiple images with the same GSS initialization in parallel, see hvsgsm.gss_compute()

emax = 0.12
emin = 0.009
fr_a = 22
fr_b = 0.25
fr_k = 0.67
norm = 4
contrast = 0.8
adapt(ev_eye=None)[source]

step 1 in compute, adapt eye to image

get_glare_sources()[source]

step 2 in compute, isolate glare sources

glare_response(img_gs, e_g, pupa, pupd, return_arrays=False)[source]

step 3 in compute, apply steps of Vissenberg et al. model

Parameters
  • img_gs (np.array) – representing all glare sources

  • e_g (float) – global retinal irradiance

  • pupa (float) – pupil area (mm^2)

  • pupd (float) – pupil diameter (mm)

  • return_arrays (bool, optional) – if True returns second value with dict of process arrays else return r_w only

Returns

  • r_w (np.array) – weighted glare response for entire retina as represented by image

  • parrays (dict, optional) – returned when return_arrays=True; keys: retinal_irrad, psf, adapt_eye_movement, direct_eye_movement, local_adaptation, response_ratio, response_lin, response_log

compute(save=None, ev_eye=None)[source]

apply glare sensation model to loaded image

Parameters
  • save (str) – if given, save the response image to the specified file (.hdr)

  • ev_eye (float, optional) – externally calculated Ev

Return type

float

property ecc
property lum
property res

resolution, set via lum

property vecs

directions, set via lum

property omega

solid angle, set via lum

property mask

view mask, set via lum

property ctheta

cos between vectors and view direction, set via lum

property sigma_c

position index scaled to eccentricity .009-.12 (used in field_response)

Note that this differs from the implementation described by Vissenberg and incorporates Kim below the horizon

property vm
pupil(ev)[source]

calculate pupil area

Based on: Donners, Maurice & Vissenberg, Michel & Geerdinck, L.M. & Broek-Cools, J. (2015). A PSYCHOPHYSICAL MODEL OF DISCOMFORT GLARE IN BOTH OUTDOOR AND INDOOR APPLICATIONS.

Parameters

ev – illuminance at eye (lux)

retinal_irradiance(lum, pupa)[source]

adjust incident light on retina based on pupil size and focal-length

from Vissenberg et al. 2021 equation (1):

(1) E_r = A_p * L / f^2

E_r: local retinal irradiance
L: field luminance
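A direct transcription of equation (1) as a sketch (f defaults to the eye focal length class parameter):

def retinal_irradiance_eq1(lum, pupa, f=16.67):
    # E_r = A_p * L / f^2
    return pupa * lum / f**2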

prep_kernel()[source]

construct an array to hold a kernel scaled to image resolution

psf_coef(pupd)[source]

age, pupil size and pigmentation adjusted PSF coefficients

PSF:

PSF(phi) = sum(c * f_b(phi))
f_b(phi) = b / (2π * (sin^2(phi) + b^2 * cos^2(phi))^1.5)   [1/steradian]

LSF:

LSF(phi) = sum(c * l_b(phi))
l_b(phi) = b / (π * (sin^2(phi) + b^2 * cos^2(phi)))   [1/rad]

based on: J.K. Ijspeert, T.J.T.P. Van Den Berg, H. Spekreijse, An improved mathematical description of the foveal visual point spread function with parameters for age, pupil size and pigmentation, Vision Research, Volume 33, Issue 1, 1993,Pages 15-20, ISSN 0042-6989, https://doi.org/10.1016/0042-6989(93)90053-Y.

apply_psf(e_r, pupd)[source]

apply human foveal point spread function

based on: J.K. Ijspeert, T.J.T.P. Van Den Berg, H. Spekreijse, An improved mathematical description of the foveal visual point spread function with parameters for age, pupil size and pigmentation, Vision Research, Volume 33, Issue 1, 1993,Pages 15-20, ISSN 0042-6989, https://doi.org/10.1016/0042-6989(93)90053-Y.

apply_eye_movement_1(e_r)[source]

eye movement gaussian adaptation model to blur image at the time-scale of adaptation response.

based on: R. A. Normann, B. S. Baxter, H. Ravindra and P. J. Anderton, “Photoreceptor contributions to contrast sensitivity: Applications in radiological diagnosis,” in IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-13, no. 5, pp. 944-953, Sept.-Oct. 1983, doi: 10.1109/TSMC.1983.6313090.

Parameters

e_r (np.array) – retinal irradiance (optical correction)

Returns

retinal irradiance (with adaptation scale movement and optical correction)

Return type

adapt_eye_movement

apply_eye_movement_2(e_r, e_g)[source]

blur image due to eye movement during direct response

from Vissenberg et al. 2021 equations (5) and (6):

(5) τ = 100 / (E_g * f^2)^0.12 ms

tau (τ): cone integration time

(6) w = 2 * sqrt(D * τ)

D = 30.0 arcmin^2 * s^-1 (ocular drift); D = 250.0 (micro saccades)

Parameters
  • e_r (np.array) – retinal irradiance (optical correction)

  • e_g (float) – global retinal irrradiance

Returns

retinal irradiance (with movement and optical correction)

Return type

direct_eye_movement
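A sketch of the quantities in equations (5) and (6); converting tau from ms to seconds before equation (6) is an assumption here, and the mapping of the width w to an image-space kernel is not shown:

import numpy as np

def movement_blur_width(e_g, f=16.67, drift=30.0):
    # eq. (5): cone integration time in ms
    tau = 100 / (e_g * f**2)**0.12
    # eq. (6): blur width, with D in arcmin^2/s (30.0 drift, 250.0 micro saccades)
    w = 2 * np.sqrt(drift * tau / 1000)  # assumes tau converted to seconds
    return tau, w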

local_eye_adaptation(e_r, e_g)[source]

calculate localized eye adaptation

from Vissenberg et al. 2021 equation (4):

(4) log_10(E_a) = p * log_10(E_r) + (1 - p) * log_10(E_g)

E_a: adaptation illuminance
p: 0.8 (indoor / moderate) - 0.9 (outdoor / strong) contrast

Parameters
  • e_r (np.array) – retinal irradiance (optical correction)

  • e_g (float) – global retinal irrradiance

Return type

local_adaptation
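Equation (4) rewritten in linear form as a sketch (p = 0.8 presumably corresponds to the class attribute contrast):

def local_adaptation_eq4(e_r, e_g, p=0.8):
    # log10(E_a) = p*log10(E_r) + (1-p)*log10(E_g),
    # i.e. a weighted geometric mean of local and global retinal irradiance
    return e_r**p * e_g**(1 - p)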

static cone_response(e_r, e_a)[source]

calculate local response as a fraction of maximum at current adaptation

from Vissenberg et al. 2021 equations (2) and (3):

(2) V/V_m = E_r^n / (E_r^n + σ^n)

V: photoreceptor response
V_m: maximum response
E_r: local retinal illuminance (apply w to this E_r)
n: 0.74

(3) σ = (5.701055^(1/2.55) + E_a^(1/2.55))^2.55

sigma (σ): half-saturation retinal illuminance value

Parameters
  • e_r (np.array) – retinal irradiance (with movement and optical correction)

  • e_a (np.array) – local adaptation

Return type

response_ratio
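Equations (2) and (3) transcribed as a sketch:

def cone_response_eq23(e_r, e_a, n=0.74):
    # eq. (3): half-saturation retinal illuminance
    sigma = (5.701055**(1 / 2.55) + e_a**(1 / 2.55))**2.55
    # eq. (2): response as a fraction of maximum
    return e_r**n / (e_r**n + sigma**n)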

field_response(vvm)[source]

receptive field response

from Vissenberg et al. 2021 equation (7):

R_RF(r) = e^(-r^2/(2σ_c^2)) / (2πσ_c^2)
          - K * e^(-r^2/(2σ_s^2)) / (2πσ_s^2)

R_RF: receptive field response
r: distance to receptive field center (degrees)
σ_c: gaussian width of center (0.009 (center) - 0.12 (edge of FOV) degrees)
σ_s: gaussian width of surround = 3.5 * σ_c
K: DoG balance factor = 0.67

Parameters

vvm (np.array) – response_ratio (saturation)

Returns

linear, difference of gaussians

Return type

response_lin
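A simplified difference-of-gaussians sketch with a single fixed center width in pixels (the method itself varies σ_c with eccentricity and works in degrees):

from scipy.ndimage import gaussian_filter

def dog_response(vvm, sigma_c_px, k=0.67):
    # eq. (7): center gaussian minus weighted surround (3.5 * sigma_c)
    center = gaussian_filter(vvm, sigma_c_px)
    surround = gaussian_filter(vvm, 3.5 * sigma_c_px)
    return center - k * surround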

normalized_field_response(r)[source]

normalized non-linear ganglion response

from Vissenberg et al. 2021 equation (8):

(8) R_G = 1 / (1 + e^(-a * (R_lin - b)))

R_G: normalized non-linear ganglion response
a: slope of logistic = 22
b: 0.25

Parameters

r (np.array) – response_lin

Returns

logistic

Return type

response_log
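Equation (8) as a sketch (a and b correspond to the class attributes fr_a and fr_b):

import numpy as np

def ganglion_response_eq8(r_lin, a=22, b=0.25):
    # R_G = 1 / (1 + e^(-a * (R_lin - b)))
    return 1 / (1 + np.exp(-a * (r_lin - b)))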

gss(r_g)[source]

calculate minkowski sum on normalized response

from Vissenberg et al. 2021 equation (9):

(9) GSS = sum_i(R_G,i^m * 𝛿_i)^(1/m)

GSS: glare sensation score
m: minkowski norm (4)
delta (𝛿): solid angle of pixel (steradians)
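Equation (9) as a sketch (m = 4 corresponds to the class attribute norm; the position weighting applied in step 13 of the model is not shown):

import numpy as np

def gss_eq9(r_g, omega, m=4):
    # GSS = (sum_i R_G,i^m * delta_i)^(1/m)
    return np.sum(r_g**m * omega)**(1 / m)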

Notes

fit on guth data using BCD = 2843.58 * e^(x + 1.5 * x^2) / 179 with a 2.12 degree source and 34.26 cd/m^2 background:

numpy.polynomial.Polynomial.fit(ecc, fac, 7, window=[0, 1],
                                domain=[.009, 0.12])
# where x = eccentricity (.009 -.12 from 0 to 55 degree vertical
# angle and y = 1/unweighted GSS

results:

33.12797281707965 + 2.2872877726594725·x¹ - 104.61419835147568·x² -
275.45010218009116·x³ + 1587.8255352939432·x⁴ -
2570.6813747583033·x⁵ + 1837.1741161137818·x⁶ - 499.8491902780004·x⁷