custom_tools.general_tools package

Submodules

custom_tools.general_tools.custom_arcpy module

class custom_tools.general_tools.custom_arcpy.SelectionType(*values)[source]

Bases: Enum

Summary:

Enum class that holds the strings for the selection types.

NEW_SELECTION = 'NEW_SELECTION'
ADD_TO_SELECTION = 'ADD_TO_SELECTION'
REMOVE_FROM_SELECTION = 'REMOVE_FROM_SELECTION'
SUBSET_SELECTION = 'SUBSET_SELECTION'
SWITCH_SELECTION = 'SWITCH_SELECTION'
CLEAR_SELECTION = 'CLEAR_SELECTION'
class custom_tools.general_tools.custom_arcpy.OverlapType(*values)[source]

Bases: Enum

Summary:

Enum class that holds the strings for overlap types for select by location.

INTERSECT = 'INTERSECT'
INTERSECT_3D = 'INTERSECT_3D'
INTERSECT_DBMS = 'INTERSECT_DBMS'
WITHIN_A_DISTANCE = 'WITHIN_A_DISTANCE'
WITHIN_A_DISTANCE_3D = 'WITHIN_A_DISTANCE_3D'
WITHIN_A_DISTANCE_GEODESIC = 'WITHIN_A_DISTANCE_GEODESIC'
CONTAINS = 'CONTAINS'
COMPLETELY_CONTAINS = 'COMPLETELY_CONTAINS'
CONTAINS_CLEMENTINI = 'CONTAINS_CLEMENTINI'
WITHIN = 'WITHIN'
COMPLETELY_WITHIN = 'COMPLETELY_WITHIN'
WITHIN_CLEMENTINI = 'WITHIN_CLEMENTINI'
ARE_IDENTICAL_TO = 'ARE_IDENTICAL_TO'
BOUNDARY_TOUCHES = 'BOUNDARY_TOUCHES'
SHARE_A_LINE_SEGMENT_WITH = 'SHARE_A_LINE_SEGMENT_WITH'
CROSSED_BY_THE_OUTLINE_OF = 'CROSSED_BY_THE_OUTLINE_OF'
HAVE_THEIR_CENTER_IN = 'HAVE_THEIR_CENTER_IN'
custom_tools.general_tools.custom_arcpy.resolve_enum(enum_class, value)[source]

Resolves various types of inputs to their corresponding enum member within a specified enumeration class.

This function is designed to enhance flexibility in function arguments, allowing the use of enum members, their string names, or their values interchangeably. This is particularly useful in scenarios where function parameters might be specified in different formats, ensuring compatibility and ease of use.

Parameters:
  • enum_class (Enum) – The enumeration class to which the value is supposed to belong.

  • value (str, Enum, or any) – The input value to resolve. This can be the enum member itself, its string name, or its associated value. The function is designed to handle these various formats gracefully.
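The resolution order can be pictured with a minimal sketch (illustrative only; the behaviour on an unresolvable value is not documented here, so the sketch simply returns None):

def resolve_enum_sketch(enum_class, value):
    # Already a member of the target enum: return it unchanged.
    if isinstance(value, enum_class):
        return value
    # A string may name a member, e.g. "NEW_SELECTION".
    if isinstance(value, str) and value in enum_class.__members__:
        return enum_class[value]
    # Otherwise try to match a member's value.
    try:
        return enum_class(value)
    except ValueError:
        return None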

custom_tools.general_tools.custom_arcpy.select_attribute_and_make_feature_layer(input_layer, expression, output_name, selection_type='NEW_SELECTION', inverted=False)[source]
Summary:

Selects features based on an attribute query and creates a new feature layer with the selected features.

Details:
  • A temporary feature layer is created from the input_layer.

  • The selection type, defined by selection_type, is applied to this layer.

  • The selected features are stored in a new temporary feature layer.

Parameters:
  • input_layer (str) – The path or name of the input feature layer.

  • expression (str) – The SQL expression for selecting features.

  • output_name (str) – The name of the output feature layer.

  • selection_type (str, optional) – Type of selection (e.g., “NEW_SELECTION”). Defaults to “NEW_SELECTION”.

  • inverted (bool, optional) – If True, inverts the selection. Defaults to False.

Example

>>> custom_arcpy.select_attribute_and_make_feature_layer(
...     input_layer=input_n100.ArealdekkeFlate,
...     expression=urban_areas_sql_expr,
...     output_name=Building_N100.data_preparation___urban_area_selection_n100___n100_building.value,
... )
'Building_N100.data_preparation___urban_area_selection_n100___n100_building' created temporarily.
custom_tools.general_tools.custom_arcpy.select_attribute_and_make_permanent_feature(input_layer, expression, output_name, selection_type='NEW_SELECTION', inverted=False)[source]
Summary:

Selects features based on an attribute query and creates a new feature layer, then stores the selected features permanently in the specified output feature class.

Details:
  • A temporary feature layer is created from the input_layer.

  • The selection_type determines how the selection is applied to this layer. If inverted is True, the selection is inverted.

  • The selection is done on the feature layer using the expression.

  • The selected features are stored permanently in a new feature class specified by output_name using copy features.

Parameters:
  • input_layer (str) – The path or name of the input feature layer for selection.

  • expression (str) – The SQL expression used for selecting features.

  • output_name (str) – The name for the new, permanent output feature class.

  • selection_type (str, optional) – Specifies the type of selection. Defaults to “NEW_SELECTION”.

  • inverted (bool, optional) – If set to True, inverts the selection. Defaults to False.

Example

>>> custom_arcpy.select_attribute_and_make_permanent_feature(
...     input_layer=input_n100.ArealdekkeFlate,
...     expression=urban_areas_sql_expr,
...     output_name=Building_N100.adding_matrikkel_as_points__urban_area_selection_n100__permanent,
... )
'Building_N100.adding_matrikkel_as_points__urban_area_selection_n100__permanent' created permanently.
custom_tools.general_tools.custom_arcpy.select_location_and_make_feature_layer(input_layer, overlap_type, select_features, output_name, selection_type=SelectionType.NEW_SELECTION, inverted=False, search_distance=None)[source]
Summary:

Selects features from the input layer based on their spatial relationship to the selection features and creates a new, temporary feature layer as the output.

Details:
  • Creates a feature layer from the input_layer.

  • Depending on the overlap_type, features in input_layer that spatially relate to select_features are selected.

  • If overlap_type requires a search_distance and it is provided, the distance is used in the selection.

  • The selection can be inverted if inverted is set to True.

  • The selected features are stored in a new temporary feature layer named output_name.

Parameters:
  • input_layer (str) – The path or name of the input feature layer.

  • overlap_type (str or OverlapType) – The spatial relationship type to use for selecting features.

  • select_features (str) – The path or name of the feature layer used to select features from the input_layer.

  • output_name (str) – The name of the output feature layer.

  • selection_type (SelectionType, optional) – Specifies the type of selection. Defaults to SelectionType.NEW_SELECTION.

  • inverted (bool, optional) – If True, inverts the selection. Defaults to False.

  • search_distance (str, optional) – A distance value that defines the proximity for selecting features. Required for certain overlap_type values.

Example

>>> custom_arcpy.select_location_and_make_feature_layer(
...     input_layer=Building_N100.data_preparation___polygons_that_are_large_enough___n100_building.value,
...     overlap_type=custom_arcpy.OverlapType.INTERSECT.value,
...     select_features=Building_N100.grunnriss_to_point__aggregated_polygon__n100.value,
...     output_name=Building_N100.simplify_polygons___not_intersect_aggregated_and_original_polygon___n100_building.value,
...     inverted=True,
... )
'simplify_polygons___not_intersect_aggregated_and_original_polygon___n100_building' created temporarily.
custom_tools.general_tools.custom_arcpy.select_location_and_make_permanent_feature(input_layer, overlap_type, select_features, output_name, selection_type=SelectionType.NEW_SELECTION, inverted=False, search_distance=None)[source]
Summary:

Selects features from the input layer based on their spatial relationship to the selection features and creates a new, permanent feature class as the output.

Details:
  • Initiates by creating a temporary feature layer from input_layer.

  • Applies a spatial selection based on overlap_type between the input_layer and select_features.

  • Utilizes search_distance if required by the overlap_type and provided, to define the proximity for selection.

  • The selection can be inverted if inverted is set to True, meaning it will select all features not meeting the spatial relationship criteria.

  • The selected features are stored permanently in a new feature class specified by output_name.

  • Cleans up by deleting the temporary feature layer to maintain a tidy workspace.

Parameters:
  • input_layer (str) – The path or name of the input feature layer.

  • overlap_type (str or OverlapType) – The type of spatial relationship to use for feature selection.

  • select_features (str) – The feature layer used as a reference for spatial selection.

  • output_name (str) – The name for the new, permanent feature class to store selected features.

  • selection_type (SelectionType, optional) – The method of selection to apply. Defaults to SelectionType.NEW_SELECTION.

  • inverted (bool, optional) – If set to True, the selection will be inverted. Defaults to False.

  • search_distance (str, optional) – The distance within which to select features, necessary for certain types of spatial selections like WITHIN_A_DISTANCE.

Example

>>> custom_arcpy.select_location_and_make_permanent_feature(
...     input_layer=Building_N100.data_selection___begrensningskurve_n100_input_data___n100_building.value,
...     overlap_type=OverlapType.WITHIN_A_DISTANCE.value,
...     select_features=Building_N100.polygon_propogate_displacement___building_polygons_after_displacement___n100_building.value,
...     output_name=Building_N100.polygon_resolve_building_conflicts___begrensningskurve_500m_from_displaced_polygon___n100_building.value,
...     search_distance="500 Meters",
... )
'polygon_resolve_building_conflicts___begrensningskurve_500m_from_displaced_polygon___n100_building' created permanently.
custom_tools.general_tools.custom_arcpy.apply_symbology(input_layer, in_symbology_layer, output_name, grouped_lyrx=False, target_layer_name=None)[source]
Summary:

Applies symbology from a specified lyrx file to an input feature layer and saves the result as a new lyrx file.

Details:
  • Creates a temporary feature layer from the input_layer.

  • Applies symbology to the temporary layer using the symbology defined in in_symbology_layer.

  • The symbology settings are maintained as they are in the symbology layer file.

  • Saves the temporary layer with the applied symbology to a new layer file specified by output_name.

  • Deletes the temporary layer to clean up the workspace.

  • A confirmation message is printed indicating the successful creation of the output layer file.

Parameters:
  • input_layer (str) – The path or name of the input feature layer to which symbology will be applied.

  • in_symbology_layer (str) – The path to the layer file (.lyrx) containing the desired symbology settings.

  • output_name (str) – The name (including path) for the output layer file (.lyrx) with the applied symbology.

  • grouped_lyrx (bool, optional) – If True, the symbology is applied to the entire layer group. Defaults to False.

  • target_layer_name (str, optional) – The name of the target layer within the layer group to which symbology is applied. Defaults to None.

Example

>>> custom_arcpy.apply_symbology(
...     input_layer=Building_N100.point_displacement_with_buffer___merged_buffer_displaced_points___n100_building.value,
...     in_symbology_layer=config.symbology_n100_grunnriss,
...     output_name=Building_N100.polygon_resolve_building_conflicts___building_polygon___n100_building_lyrx.value,
... )
'polygon_resolve_building_conflicts___building_polygon___n100_building_lyrx.lyrx file created.'

custom_tools.general_tools.file_utilities module

class custom_tools.general_tools.file_utilities.FeatureClassCreator(template_fc, input_fc, output_fc, object_type='POLYGON', delete_existing=False)[source]

Bases: object

run()[source]

Executes the process of creating a new feature class and appending geometry from the input feature class.
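A hedged usage sketch (all paths are placeholders):

>>> creator = FeatureClassCreator(
...     template_fc="path/to/template_feature_class",
...     input_fc="path/to/input_feature_class",
...     output_fc="path/to/output_feature_class",
...     object_type="POLYGON",
...     delete_existing=True,
... )
>>> creator.run()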

custom_tools.general_tools.file_utilities.create_feature_class(template_feature: str, new_feature: str) None[source]

Creates a new feature class from a template feature class.

custom_tools.general_tools.file_utilities.delete_feature(input_feature)[source]

Deletes a feature class if it exists.

custom_tools.general_tools.file_utilities.compare_feature_classes(feature_class_1, feature_class_2)[source]
custom_tools.general_tools.file_utilities.reclassify_value(input_table: str, target_field: str, target_value: str, replace_value: str, reference_field: str = None, logic_format: str = 'PYTHON3') None[source]
custom_tools.general_tools.file_utilities.deleting_added_field_from_feature_to_x(input_file_feature: str = None, field_name_feature: str = None) None[source]
custom_tools.general_tools.file_utilities.get_all_fields(input_fields, *added_field_sets)[source]

Combines an input fields list with any number of additional field sets. Assumes each added field set is a list of [field_name, field_type] pairs.
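For example, assuming the combined list is returned as-is (field names are invented for the example):

>>> base_fields = [["road_id", "LONG"], ["road_name", "TEXT"]]
>>> buffer_fields = [["buffer_width", "DOUBLE"]]
>>> get_all_fields(base_fields, buffer_fields)
[['road_id', 'LONG'], ['road_name', 'TEXT'], ['buffer_width', 'DOUBLE']]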

custom_tools.general_tools.file_utilities.count_objects(input_layer)[source]
custom_tools.general_tools.file_utilities.delete_fields_if_exist(feature_class_path: str, fields_to_delete: list[str]) None[source]

Deletes specified fields from a feature class if they exist.

Parameters:
  • feature_class_path (str) – Path to the feature class.

  • fields_to_delete (list[str]) – Fields to delete if present.
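Usage sketch (path and field names are placeholders):

>>> delete_fields_if_exist(
...     feature_class_path="path/to/feature_class",
...     fields_to_delete=["temp_field", "partition_selection_field"],
... )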

custom_tools.general_tools.file_utilities.write_dict_to_json(path: str, dict_data: dict) None[source]

Writes a dictionary to a specified JSON file path.

custom_tools.general_tools.file_utilities.feature_has_rows(feature: str | GdbFilePath | InjectIO) bool[source]
custom_tools.general_tools.file_utilities.print_feature_info(file_path: GdbFilePath | InjectIO, description: str) None[source]

Helper for consistent print formatting of feature diagnostics.

custom_tools.general_tools.geometry_tools module

custom_tools.general_tools.graph module

custom_tools.general_tools.line_to_buffer_symbology module

class custom_tools.general_tools.line_to_buffer_symbology.LineToBufferSymbology(line_to_buffer_config: LineToBufferSymbologyKwargs)[source]

Bases: object

selecting_different_road_lines(sql_query: str, selection_output_name: str)[source]
What:

Selects road lines based on the provided SQL query and creates a feature layer or a permanent feature class depending on the write_work_files_to_memory flag.

Parameters:
  • sql_query (str) – The SQL query string used to select the road lines.

  • selection_output_name (str) – The name for the output feature layer or file.

creating_buffer_from_selected_lines(selection_output_name, buffer_width, buffer_output_name)[source]

Creates a buffer around the selected road lines.

static merge_buffers(buffer_output_names, merged_output_name)[source]

Merges multiple buffer outputs into a single feature class.

process_query_buffer_width_pair(sql_query, original_width, counter)[source]

Processes a SQL query to select road lines and create buffers based on buffer width.

process_queries()[source]

Processes all SQL queries to create buffers and handle merging.

run()[source]
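A hedged usage sketch for the class, assuming a LineToBufferSymbologyKwargs instance has been configured elsewhere (its fields are not shown in this listing):

>>> symbology = LineToBufferSymbology(line_to_buffer_config=buffer_config)
>>> symbology.run()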

custom_tools.general_tools.param_utils module

custom_tools.general_tools.param_utils.to_kwargs_or_empty(obj: Mapping[str, Any] | Any | None) Dict[str, Any][source]

Coerce params to a **kwargs-compatible dict:
  • None -> {}

  • dataclass -> asdict(…)

  • Mapping -> must have str keys
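The coercion rules amount to roughly the following sketch (illustrative; actual error messages and edge cases may differ):

from collections.abc import Mapping
from dataclasses import asdict, is_dataclass

def to_kwargs_or_empty_sketch(obj):
    if obj is None:
        return {}
    # Dataclass instance (not the class itself) -> plain dict.
    if is_dataclass(obj) and not isinstance(obj, type):
        return asdict(obj)
    if isinstance(obj, Mapping):
        if not all(isinstance(key, str) for key in obj):
            raise TypeError("Mapping keys must be str to be usable as **kwargs")
        return dict(obj)
    raise TypeError(f"Cannot coerce {type(obj).__name__} to kwargs")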

custom_tools.general_tools.param_utils.ensure_dataclass_list(params: Any | Sequence[Any] | None) List[Any][source]
Accept only:
  • None -> []

  • dataclass instance -> [instance]

  • (list|tuple) of dataclass instances -> list(instances)

Everything else -> TypeError
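For example, with a hypothetical dataclass:

>>> from dataclasses import dataclass
>>> @dataclass
... class BufferParams:
...     width: int
>>> ensure_dataclass_list(None)
[]
>>> ensure_dataclass_list(BufferParams(width=30))
[BufferParams(width=30)]
>>> ensure_dataclass_list([BufferParams(width=30), BufferParams(width=60)])
[BufferParams(width=30), BufferParams(width=60)]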

custom_tools.general_tools.param_utils.validate_positional_arity(fn: Any, n: int, *, allow_varargs=True) None[source]

Ensure fn can accept n positional args (ignoring ‘self’).
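Usage sketch (the exception type raised on a mismatch is not documented here):

>>> def clip(input_fc, clip_fc, output_fc): ...
>>> validate_positional_arity(clip, 3)   # accepted: clip takes three positional args
>>> validate_positional_arity(clip, 5)   # expected to raise: clip cannot take five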

custom_tools.general_tools.param_utils.payload_log(params: Any | Sequence[Any] | None, jsonifier) List[Dict[str, Any]] | None[source]

Return a list of dataclass-log dicts (or None).

custom_tools.general_tools.partition_iterator module

class custom_tools.general_tools.partition_iterator.PartitionIterator(partition_io_config: PartitionIOConfig, partition_method_inject_config: MethodEntriesConfig, partition_iterator_run_config: PartitionRunConfig, work_file_manager_config: WorkFileConfig)[source]

Bases: object

Partitioned processing pipeline for large vector datasets with context-aware selection and configurable, injected methods.

Overview:

This iterator splits work into cartographic partitions and processes only the features relevant to each partition. It distinguishes between:

  • processing inputs: primary datasets to process, and

  • context inputs: supporting datasets selected within a configurable distance of the processing features.

For each partition, the iterator:
  1. Selects processing features (center-in partition, plus optional near-by radius).

  2. Selects context features (within the same radius of the partition).

  3. Resolves and executes injected methods (functions or class methods) whose parameters may include injected I/O paths.

  4. Appends the partition’s outputs to the configured final outputs.

  5. Logs iteration catalogs, method parameters, attempts, and errors to a documentation directory.

Catalogs:

The iterator maintains three dictionaries:

  • input_catalog: per object (dataset name) stores metadata and the global input paths; populated from PartitionIOConfig.

  • output_catalog: per object stores final output paths; populated from PartitionIOConfig.

  • iteration_catalog: per partition, per object, stores iteration-specific paths (e.g., the selected subset path, produced outputs) and counts.

The following keys are used inside catalogs:
  • INPUT_TYPE_KEY / DATA_TYPE_KEY: metadata about each object’s role and data type.

  • INPUT_KEY: the canonical tag for the active input path of an object.

  • COUNT: number of selected features this iteration.

  • DUMMY: a dummy feature path used to keep downstream logic stable when a particular object has no features in a partition.
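As a rough illustration, one partition’s iteration_catalog entry might look like this (object names, paths, and counts are invented for the example; the dummy path substitution follows update_empty_object_tag_with_dummy_file below):

{
    "building_points": {
        "input_type": "processing",
        "data_type": "vector",
        "input": "<work_dir>/building_points_input_partition_4",
        "count": 128
    },
    "land_cover": {
        "input_type": "context",
        "data_type": "vector",
        "input": "<dummy_dir>/land_cover_dummy",
        "count": 0
    }
}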

Injection & method execution:

Injected method configs (functions or class methods) may include InjectIO(object, tag) placeholders that refer to a catalog object (dataset key) and a tag (path key).

Path resolution rules per partition:
  • InjectIO(obj, “input”): always resolves to the selected features for this partition (never the global dataset), ensuring your method receives only the slice relevant to the current partition.

  • InjectIO(obj, some_tag) where some_tag != “input”: resolves to a new, unique iteration-scoped path under the iteration work directory. If no such entry exists yet for (obj, some_tag), the iterator creates and registers it in iteration_catalog[obj][some_tag]. Your injected method is expected to write to it.

  • You may introduce new tags for an existing object (e.g., “buffer”, “cleaned”) or even new objects via InjectIO(new_object, new_tag). The iterator will allocate paths and track them in iteration_catalog as those tags/objects appear in params.

Resolution & execution flow:
  1. resolve_injected_io_for_methods(…) deep-walks params (dataclasses, dicts, lists, tuples, sets), replacing every InjectIO with a concrete, partition-scoped path, and returns a resolved config.

  2. execute_injected_methods_with_retry(…) runs the resolved methods with retry semantics. For class methods, constructor vs method kwargs are split automatically (only names present on __init__ are sent to the constructor).

  3. Each attempt produces JSON logs (params, status, exceptions with tracebacks). On success, a consolidated method_log_{partition_id}.json is written. On failure, per-attempt logs live under error_logs/error_{partition_id}/attempt_{n}_error.json.

Notes:
  • The iterator does not implicitly copy data into non-“input” targets; it only allocates file paths. Your injected method is responsible for creating/writing those outputs.

  • Using “input” guarantees you receive the current partition’s selection, not the global source.
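A rough sketch of one injected method entry using InjectIO placeholders (the entry is shown as a plain dict because the concrete core_config dataclass shapes are not listed here; buffer_road_lines is a hypothetical function):

buffer_entry = {
    "method": buffer_road_lines,                    # hypothetical injected function
    "params": {
        "input_fc": InjectIO("roads", "input"),     # resolves to this partition's selection
        "output_fc": InjectIO("roads", "buffer"),   # new iteration-scoped path; the method must write it
        "buffer_distance": "30 Meters",
    },
}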

Partition creation & selection:

Cartographic partitions are created from all configured processing inputs. For each partition:

  • Processing features are selected by “center in partition” and optionally augmented with “near partition” features (radius = context_radius_meters), with a PARTITION_FIELD set to 1 (center-in) or 0 (nearby) to preserve provenance.

  • Context features are selected by distance to the same partition (using the configured radius).

Logging (documentation directory):

At the start of run(), the configured documentation_directory is cleared and recreated (with safety checks). The iterator writes:

  • input_catalog.json, output_catalog.json (initial state),

  • iteration_catalog/catalog_{partition_id}.json (per-partition selection),

  • method_logs/method_log_{partition_id}.json (final success per partition),

  • error_logs/error_{partition_id}/attempt_{n}_error.json (per attempt, on error),

  • error_log.json (retry summary across partitions).

Args (configs):
  • partition_io_config (core_config.PartitionIOConfig): Declares input objects (processing/context) and output objects (vector outputs) with their paths and data types. Also provides documentation_directory.

  • partition_method_inject_config (core_config.MethodEntriesConfig): The list of injected methods (functions or class methods) with their parameter configs. Any InjectIO placeholders will be resolved per partition.

  • partition_iterator_run_config (core_config.PartitionRunConfig): Runtime knobs: context radius (meters), max elements per partition, partition method (“FEATURES” or “VERTICES”), object ID field, and whether to auto-optimize the partition feature count.

  • work_file_manager_config (core_config.WorkFileConfig): Controls where and how temporary/iteration/persistent paths are generated.

Side effects:
  • Creates and deletes intermediate feature classes and layers.

  • Writes JSON logs under documentation_directory (safe-guarded).

  • Appends to final outputs as partitions complete.

  • Adds then removes (via cleanup) the PARTITION_FIELD as needed.

Raises:
  • ValueError for duplicate input objects.

  • RuntimeError when no valid partition size can be found (if optimization is enabled).

  • Any exception thrown inside injected methods is captured, logged, and retried up to the configured maximum. If all retries fail, the exception is re-raised.

Example (high level):

>>> iterator = PartitionIterator(
...     partition_io_config=io_config,
...     partition_method_inject_config=methods_config,
...     partition_iterator_run_config=run_config,
...     work_file_manager_config=wm_config,
... )
>>> iterator.run()

INPUT_TYPE_KEY = 'input_type'
DATA_TYPE_KEY = 'data_type'
INPUT_KEY = 'input'
DUMMY = 'dummy'
COUNT = 'count'
PARTITION_FIELD = 'partition_selection_field'
resolve_partition_input_config(entries: List[ResolvedInputEntry], target_dict: Dict[str, Dict[str, str]]) None[source]

Add resolved input entries to target_dict.

Ensures each object appears only once; raises on duplicates.

resolve_partition_output_config(entries: List[ResolvedOutputEntry], target_dict: Dict[str, Dict[str, str]]) None[source]

Add resolved output entries to target_dict.

Ensures each (object, tag) pair is unique; raises on duplicate tags within the same object.

delete_final_outputs()[source]

Deletes all final output feature classes if they exist.

delete_iteration_files(*file_paths)[source]

Deletes multiple feature classes or files from a list.

create_dummy_features(tag: str) None[source]

Create dummy feature classes for all objects that have a valid path under the given tag. Used to provide stable placeholder inputs when a partition produces no features.

update_empty_object_tag_with_dummy_file(object_key: str, tag: str) None[source]

Replaces the value for the given tag with the dummy path if available for the object_key.

write_documentation(name: str, dict_data: dict, sub_dir: str | None = None) None[source]

Writes a JSON file to documentation_directory or its subdirectory. Ensures the destination directory exists.

update_max_partition_count() None[source]

Determine the maximum OBJECTID for partitioning.

prepare_input_data()[source]

Prepare all inputs for partitioning.

  • Processing inputs: counted and tagged with a PARTITION_FIELD.

  • Context inputs: either counted directly (if search_distance <= 0) or filtered to features within the search radius of processing inputs.

select_partition_feature(iteration_partition, object_id)[source]

Selects partition feature based on OBJECTID.

process_single_processing_input(object_key: str, input_path: str, iteration_partition: str, partition_id: int) bool[source]

Select and prepare a single processing input for one partition.

How:
  • Selects features whose center lies within the partition.

  • Marks them with PARTITION_FIELD = 1 and copies them to an iteration-scoped dataset.

  • If search_distance > 0, also selects nearby features: includes features within the search radius but not center-in, marks them with PARTITION_FIELD = 0, and appends them to the same iteration dataset.

  • Updates iteration_catalog with path and feature count.

  • Creates a dummy feature if no features are found.

Returns:

True if the partition contains any features for this object, False otherwise.

Return type:

bool

process_all_processing_inputs(iteration_partition: str, partition_id: int) bool[source]
process_single_context_input(object_key: str, input_path: str, iteration_partition: str, partition_id: int) None[source]

Select and prepare a single context input for one partition.

How:
  • Selects features within search_distance of the partition geometry.

  • Writes the selection to an iteration-scoped dataset.

  • Updates iteration_catalog with feature count and path.

  • If no features are found, assigns a dummy feature path.

Side effects:

Creates/deletes temporary feature classes and updates iteration_catalog.

process_all_context_inputs(iteration_partition: str, partition_id: int) None[source]

Process all configured context inputs for one partition.

Calls process_single_context_input for each context object and records results in iteration_catalog.

track_iteration_time(object_id: int, inputs_present: bool) None[source]

Tracks runtime and estimates remaining time based on iterations with inputs. Prints current time, elapsed runtime, and estimated remaining runtime.

resolve_injected_io_for_methods(method_entries_config: MethodEntriesConfig, partition_id: int) MethodEntriesConfig[source]

Inject concrete paths into each method entry by resolving InjectIO objects. Returns a new MethodEntriesConfig with fully resolved params.

resolve_param_injections(method_config: Any, partition_id: int) Any[source]

Recursively resolve InjectIO instances in any nested structure. Supports dicts, lists, tuples, and sets.
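The recursive walk can be sketched like this (illustrative; the actual signature and dataclass handling are omitted):

def resolve_params_sketch(value, resolve_inject):
    # resolve_inject maps an InjectIO placeholder to a concrete path string.
    if isinstance(value, InjectIO):
        return resolve_inject(value)
    if isinstance(value, dict):
        return {key: resolve_params_sketch(item, resolve_inject) for key, item in value.items()}
    if isinstance(value, (list, tuple, set)):
        return type(value)(resolve_params_sketch(item, resolve_inject) for item in value)
    return value  # leave every other value untouched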

resolve_inject_entry(inject: InjectIO, partition_id: int) str[source]

Resolve a single InjectIO placeholder to a concrete path for this partition.

execute_injected_methods(method_entries_config: MethodEntriesConfig, partition_id: int, attempt: int) Dict[str, Any][source]

Execute a fully resolved set of injected methods for one partition/attempt.

For each entry:
  • If it’s a class method: split kwargs into constructor vs. method args, instantiate the class, then call the method.

  • If it’s a function: call it with the provided kwargs.

Logging:
  • Builds an in-memory execution_log with per-entry raw params (JSON-safe), split class/method params (for classes), status (“ok” or “error”), and full exception info (type/message/traceback) on error.

  • On any exception, stores the partial log in self._last_injected_log and re-raises; the retry layer is responsible for persisting per-attempt error logs.

Parameters:
  • method_entries_config – The resolved (no remaining InjectIO) method entries.

  • partition_id – Current partition identifier (for log context).

  • attempt – 1-based attempt number (for log context).

Returns:

The complete per-attempt execution log when all entries succeed.

Return type:

Dict[str, Any]

Raises:

Exception – Re-raises the first failure after recording it in self._last_injected_log.

execute_injected_methods_with_retry(partition_id: int)[source]

Execute injected methods for a partition with retries and structured logging.

Flow per attempt:
  1. Reset self._last_injected_log to ensure a clean state.

  2. Resolve InjectIO placeholders to concrete, partition-scoped paths. If resolution fails, write an error attempt log with stage=“resolve”.

  3. Run execute_injected_methods(…).
    • On success: write method_logs/method_log_{partition_id}.json and return.

    • On failure: write error_logs/error_{partition_id}/attempt_{n}_error.json, increment self.error_log[partition_id], and retry until max_retries.

Parameters:

partition_id – Current partition identifier.

Raises:

Exception – Re-raises the last error after exhausting retries; also writes error_log.json.

append_iteration_outputs_to_final(partition_id: int) None[source]

Appends all valid outputs for the current iteration to their final output paths.

Skips any objects marked as dummy and ensures only non-empty, valid inputs are appended.

cleanup_helper_fields() None[source]

Delete the helper field PARTITION_FIELD from:
  • All final output feature classes.

  • All processing input feature classes (since it was injected for partitioning).

Ensures that only clean data structures remain after the workflow.

partition_iteration()[source]

Process every cartographic partition end-to-end.

Workflow (per partition):
  1. Reset iteration state and select the partition geometry.

  2. Select processing inputs (center-in; optionally add near-by features). If no processing features are present, skip this partition.

  3. Select context inputs within the configured search radius.

  4. Execute injected methods with retry and structured logging.

  5. Persist the iteration catalog and append valid outputs to final outputs.

Raises:

Propagates any unhandled exception from injected methods after retries are exhausted.

run()[source]

Orchestrate the full pipeline: preparation → partitioning → iteration → cleanup.

Steps:
  1. Reset the documentation directory (with safety checks) and write output_catalog.json.

  2. Data preparation:
    • Optionally delete existing final outputs.

    • Prepare processing and context inputs (add PARTITION_FIELD, pre-filter context).

    • Create per-object dummy features.

    • Write input_catalog.json.

  3. Partitioning:
    • Determine feature count (optimize if enabled) and create cartographic partitions.

  4. Iteration:
    • Call partition_iteration() to process all partitions, execute injected methods, and append per-partition results to final outputs.

  5. Cleanup & logs:
    • Remove helper fields from final outputs (e.g., PARTITION_FIELD).

    • Delete persistent temp files.

    • Write aggregated error_log.json.

custom_tools.general_tools.polygon_processor module

custom_tools.general_tools.print_logger module

class custom_tools.general_tools.print_logger.LogPath[source]

Bases: object

N100Building = 'C:\\path\\to\\folder\\you\\want\\your\\outputs\\in\\ag_outputs\\n100\\building\\general_files\\app.log'
N100River = 'C:\\path\\to\\folder\\you\\want\\your\\outputs\\in\\ag_outputs\\n100\\river\\general_files\\app.log'
custom_tools.general_tools.print_logger.setup_logger(scale, object_type, log_directory='logs', filename='app.log')[source]

WORK IN PROGRESS, NOT DONE.

Creates a logger for the specified scale and object type. Log files will be stored in a directory structure matching the scale and object type.

Parameters:
  • scale – The scale for the log (e.g., ‘n100’).

  • object_type – The object type for the log (e.g., ‘bygning’).

  • log_directory – Base directory for logs. Defaults to ‘logs’.

  • filename – Default filename for the log. Defaults to ‘app.log’.

Returns:

Logger instance for the specified scale and object type.
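Usage sketch (bearing in mind the work-in-progress note above):

>>> logger = setup_logger(scale="n100", object_type="bygning")
>>> logger.info("Starting building generalization")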

custom_tools.general_tools.study_area_selector module

class custom_tools.general_tools.study_area_selector.StudyAreaSelector(input_output_file_dict: dict, selecting_file: str, select_local: bool = False, selecting_sql_expression: str = None)[source]

Bases: object

select_study_area()[source]
use_global_files()[source]
delete_working_files(*file_paths)[source]

Deletes multiple feature classes or files.

delete_feature_class(feature_class_path)[source]

Deletes a feature class if it exists.

run()[source]
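A hedged usage sketch, assuming input_output_file_dict maps each global input path to its local output path (paths and the SQL expression are placeholders):

>>> selector = StudyAreaSelector(
...     input_output_file_dict={"path/to/global_input": "path/to/local_output"},
...     selecting_file="path/to/study_area_feature_class",
...     select_local=True,
...     selecting_sql_expression="navn = 'Oslo'",
... )
>>> selector.run()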

Module contents