Example 0: The Simplest Neuroptimiser

This example demonstrates how to use the Neuroptimiser library to solve a dummy optimisation problem.

1. Setup

Import the minimal necessary libraries.

import matplotlib.pyplot as plt
import numpy as np

from neuroptimiser import NeurOptimiser

2. Quick problem and optimiser setup

We define a simple optimisation problem with a fitness function and bounds.

problem_function    = lambda x: np.linalg.norm(x)
problem_bounds      = np.array([[-5.0, 5.0], [-5.0, 5.0]])
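The objective is the Euclidean norm, whose global minimum f(x*) = 0 is attained at the origin. A quick sanity check of the problem definition (plain NumPy, independent of the library):

```python
import numpy as np

problem_function = lambda x: np.linalg.norm(x)

# The minimum value 0 is attained at the origin:
print(problem_function(np.zeros(2)))           # 0.0
# Any other point has strictly positive fitness:
print(problem_function(np.array([3.0, 4.0])))  # 5.0
```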

Then, we instantiate a NeurOptimiser with its default configuration.

optimiser = NeurOptimiser()
# Show the overall configuration parameters of the optimiser
print("DEFAULT CONFIG PARAMS:\n", optimiser.config_params, "\n")
print("DEFAULT CORE PARAMS:\n", optimiser.core_params)
DEFAULT CONFIG PARAMS:
 {'num_iterations': 300, 'num_neighbours': 1, 'seed': 69, 'function': None, 'search_space': array([[-1,  1],
       [-1,  1]]), 'unit_topology': '2dr', 'core_params': {}, 'num_agents': 10, 'neuron_topology': '2dr', 'num_dimensions': 2, 'spiking_core': 'TwoDimSpikingCore'} 

DEFAULT CORE PARAMS:
 [{'coeffs': 'random', 'seed': None, 'approx': 'rk4', 'thr_alpha': 1.0, 'thr_k': 0.05, 'ref_mode': 'pg', 'thr_max': 1.0, 'spk_cond': 'fixed', 'dt': 0.01, 'hs_operator': 'fixed', 'name': 'linear', 'thr_min': 1e-06, 'max_steps': 100, 'noise_std': 0.1, 'alpha': 1.0, 'thr_mode': 'diff_pg', 'spk_alpha': 0.25, 'is_bounded': False, 'hs_variant': 'fixed'},
  ...]

(The same parameter dictionary is repeated once per unit; the remaining nine identical entries are omitted here.)
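Note that the default search_space is [[-1, 1], [-1, 1]], while our problem uses wider bounds. Search spaces are arrays of shape (num_dimensions, 2) with one [lower, upper] row per dimension; a minimal sketch of drawing a point inside such a box (an illustration only, not the library's actual initialisation routine):

```python
import numpy as np

# Search spaces are arrays of shape (num_dimensions, 2),
# one [lower, upper] row per dimension
problem_space = np.array([[-5.0, 5.0], [-5.0, 5.0]])
lower, upper  = problem_space[:, 0], problem_space[:, 1]

# Uniform sample inside the box, using the default seed shown above
rng = np.random.default_rng(69)
x0  = lower + rng.random(2) * (upper - lower)
```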

3. Optimisation process

We proceed to solve the optimisation problem using the solve method of the NeurOptimiser instance. In this example, we enable debug mode to get more detailed output during the optimisation process.

optimiser.solve(
    obj_func=problem_function,
    search_space=problem_bounds,
    debug_mode=True
)
[neuropt:log] Debug mode is enabled. Monitoring will be activated.
[neuropt:log] Parameters are set up.
[neuropt:log] Initial positions and topologies are set up.
[neuropt:log] Tensor contraction layer, neighbourhood manager, and high-level selection unit are created.
[neuropt:log] Population of nheuristic units is created.
[neuropt:log] Connections between nheuristic units and auxiliary processes are established.
[neuropt:log] Monitors are set up.
[neuropt:log] Starting simulation with 300 iterations...
... step: 0, best fitness: 2.025650978088379
... step: 30, best fitness: 1.113487958908081
... step: 60, best fitness: 0.15552423894405365
... step: 90, best fitness: 0.1322910338640213
... step: 120, best fitness: 0.024050353094935417
... step: 150, best fitness: 0.024050353094935417
... step: 180, best fitness: 0.0138082941994071
... step: 210, best fitness: 0.013055730611085892
... step: 240, best fitness: 0.013055730611085892
... step: 270, best fitness: 0.013055730611085892
... step: 299, best fitness: 0.013055730611085892
[neuropt:log] Simulation completed. Fetching monitor data... done
(array([-0.00682959,  0.0111269 ]), array([0.01305573]))
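The tuple returned by solve contains the best position found and its fitness. Since the objective is the Euclidean norm, the fitness of the best position should equal its norm; we can verify this against the values printed above:

```python
import numpy as np

g  = np.array([-0.00682959, 0.0111269])  # best position returned above
fg = 0.01305573                          # best fitness returned above

# For this objective, the fitness of g is simply its Euclidean norm
assert np.isclose(np.linalg.norm(g), fg, atol=1e-7)
```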

(Optional) 4. Results processing and visualisation

We process the results obtained from the optimiser and visualise the absolute error in fitness values over the optimisation steps.

# Recover the results from the optimiser
fp              = optimiser.results["fp"]
fg              = optimiser.results["fg"]
positions       = np.array(optimiser.results["p"])
best_position   = np.array(optimiser.results["g"])
v1              = np.array(optimiser.results["v1"])
v2              = np.array(optimiser.results["v2"])

# Calculate the absolute error in fitness values
efp             = np.abs(np.array(fp))
efg             = np.abs(np.array(fg))

# Convert the spikes to integer type
spikes          = np.array(optimiser.results["s"]).astype(int)

# Print some minimal information about the results
print(f"fg: {fg[-1][0]:.4f}, f*: {0.0:.4f}, error: {efg[-1][0]:.4e}")
print(f"norm2(g - x*): {np.linalg.norm(best_position[-1]):.4e}")
print(f"{v1.min():.4f} <= v1 <= {v1.max():.4f}")
print(f"{v2.min():.4f} <= v2 <= {v2.max():.4f}")
fg: 0.0131, f*: 0.0000, error: 1.3056e-02
norm2(g - x*): 1.3056e-02
-1.1394 <= v1 <= 1.2176
-0.8110 <= v2 <= 0.9703
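The aggregation used in the plot below assumes efp has shape (num_steps, num_agents), so reducing along axis=1 summarises the population at each step. A minimal illustration with synthetic data (the decaying noise is only a stand-in for real per-agent errors):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-agent absolute errors: 300 steps x 10 agents,
# decaying over time as a stand-in for a converging run
decay    = np.exp(-np.linspace(0.0, 5.0, 300))[:, None]
efp_demo = np.abs(rng.normal(size=(300, 10))) * decay

max_err    = np.max(efp_demo, axis=1)      # worst agent per step
mean_err   = np.average(efp_demo, axis=1)  # population mean per step
median_err = np.median(efp_demo, axis=1)   # population median per step
```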
fig, ax = plt.subplots(figsize=(6.9, 6.9*0.618))

plt.plot(efp, color="silver", alpha=0.5)
plt.plot(np.max(efp, axis=1), '--', color="red", label=r"Max.")
plt.plot(np.average(efp, axis=1), '--', color="black", label=r"Mean")
plt.plot(np.median(efp, axis=1), '--', color="blue", label=r"Median")
plt.plot(efg, '--', color="green", label=r"Min.")

plt.xlabel(r"Step, $t$")
plt.ylabel(r"Abs. Error, $\varepsilon_f$")

lgd = plt.legend(ncol=2, loc="lower left")

plt.xscale("log")
plt.yscale("log")

ax.patch.set_alpha(0)
fig.tight_layout()
(Figure: per-agent absolute fitness errors over optimisation steps, with the max, mean, median, and best-so-far curves on log-log axes.)