Int. J. Simul. Multidisci. Des. Optim., Volume 11, 2020
Article Number: 16
Number of pages: 13
DOI: https://doi.org/10.1051/smdo/2020008
Published online: 07 August 2020
Research Article
Modified election algorithm in Hopfield neural network for optimal random k satisfiability representation
^{1} School of Mathematical Sciences, Universiti Sains Malaysia, Pulau Pinang, 11800 USM, Penang, Malaysia
^{2} Department of Mathematics, Federal University Dutsin-Ma, Nigeria
^{3} Department of Mathematics, Isa Kaita College of Education, Dutsin-Ma, Nigeria
^{*} e-mail: zeeham4u2c@yahoo.com
Received: 4 April 2020
Accepted: 20 June 2020
The election algorithm (EA) is a novel metaheuristic optimization model motivated by the socio-political mechanism of the presidential elections conducted in many countries. The capability and robustness of EA in finding optimal solutions to optimization problems have been proven by various researchers. In this paper, a modified version of EA is utilized to accelerate the searching capacity of the Hopfield neural network (HNN) learning phase for optimal random k-SAT logical representation (HNN-R2SAT-EA). The utility of the proposed approach is contrasted with the current standard exhaustive search algorithm (HNN-R2SAT-ES) and the newly developed algorithm HNN-R2SAT-ICA. The analysis clearly shows that the proposed hybrid computational model HNN-R2SAT-EA outperforms the other existing models in terms of global minima ratio (Zm), mean absolute error (MAE), Bayesian information criterion (BIC) and execution time (ET). The findings portray that the MEA surpasses the other two algorithms for optimal random k-SAT logical representation.
Key words: Election algorithm / Hopfield neural networks / exhaustive search / random satisfiability / logic programming
© H. Abubakar et al., published by EDP Sciences, 2020
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
Artificial neural networks (ANNs) belong to the family of computational architecture-based models viewed as analogues of the brain's processing: they imitate its design and attempt to mimic the nervous-system activity through which information is handled by the brain [1]. An ANN consists of many basic processing components (called artificial neurons) that are loosely based on biological neurons. It learns the relations between the processing elements and the system parameters through a cycle of adjustment. It contains several interconnected neurons, each with a specific synaptic weight that essentially determines how much one neuron's output can influence the input to the next neuron. The information is preserved in the weights and learned by the network, a process carried out by several known algorithms. There has been enormous development in computational technology and increased interest in the possible application of artificial intelligence in various domains [2].
Hopfield neural networks (HNNs) are regarded as a specific type of neural network that can preserve certain memories or patterns in a way similar to the human brain, so that they can be recovered even if only partial, distorted or noisy information is presented to the network [3]. This brain capacity is often referred to as associative or content-based memory, i.e. content addressable memory (CAM). Hopfield proposed that stable patterns/memories are energy minima of the HNN. The neural model was initially proposed by Hopfield as an associative memory, either in its original version, "first-order Hopfield networks", or in its modified/extended higher-order version [4]. The network's energy function decreases as the network evolves dynamically; consequently, the network may reach a state of equilibrium that does not correspond to a solution of the optimization problem [4]. Significant activity in this field is the quest for evolutionary metaheuristic techniques to push the network out of local minima and carry it to a global minimum. These limitations motivated researchers to propose different hybrid optimization techniques to improve the accuracy and stability of HNN in solving various optimization problems. A review of HNN for solving mathematical programming problems has been presented [5]. The studies described HNN as one of the major neural networks (NNs) for solving various optimization or mathematical programming (MP) problems, and introduced an HNN structure on an electronic circuit as an online solver with a parallel distribution process. A strategy for solving an economic power dispatch problem with a piecewise quadratic cost function using the HNN has been introduced in [6]. The suggested technique is much simpler, and its results are very similar to those of numerical methods. A method based on fused imagery as an additional source of information for super-resolution mapping using the HNN approach has been presented [7].
The study incorporated forward and inverse models in the HNN to support a new reflectance constraint added to the energy function. An algorithm named the HNN spatiotemporal data fusion model (HNN-SPOT) has been developed [8], using the HNN optimization principle for the fusion of spatiotemporal images. HNN-SPOT tackles the particular case in which the fine-resolution and coarse-resolution images are collected over the same regional area at different satellite overpass times, and it provides a fairly new method for fusing spatiotemporal data. The Hopfield Lagrange network (HLN) approach has been used to solve the problem of optimal load dispatch (OLD) in the dynamic electricity market [9]. In that study, the HLN determined the optimal productive power output of thermal generation units so as to optimize the benefits of producing energy from all the available equipment. A modified HNN algorithm (MHNNA) with remote sensing imaging was used to identify proportions of total suspended solids (TSS) in the coastal waters of Langkawi Island, Malaysia [10]. The modification allowed the HNN to translate and identify colour images from satellites. For the frequency assignment problem (FAP) in satellite communications, a hybrid Hopfield-simulated annealing algorithm (HopSA) was introduced [11]. The goal of this NP-complete problem is to reduce the co-channel conflict between satellite communication systems by rearranging the frequency assignment so that the systems can satisfy the demands. The main challenge in these studies is to implement an optimal method of learning in HNN.
Boolean satisfiability (SAT) is a well-known decision problem that consists in deciding whether a propositional logic formula can be satisfied given suitable value assignments to its variables. SAT is a widely used modelling framework for solving combinatorial problems and is also a well-known NP-complete decision problem [11]. Many NP problems can be transformed into and represented as SAT. Beyond its practical applications, SAT has also influenced many related decision and optimization problems, which can be referred to as extensions of SAT. Many practical applications in mathematics and engineering fall into the category of optimization problems that can easily be transformed into SAT. The performance improvements made to SAT solvers since the mid-1990s motivated their application to a wide range of practical problems, including cross-talk noise prediction [12], satisfiability modulo theories [13], convex optimization [14] and formula verification [15]. Additional successful examples of practical applications of SAT include knowledge compilation [16] as well as software model checking [17].
Advances in neural-symbolic computation as a fundamental approach to applied machine learning and reasoning were pioneered in [18]. The integration explains the usefulness of the technique by outlining the main characteristics of the methodology: the convergence of neural processing with the symbolic representation of intelligence and reasoning, which allows for the development of explainable AI systems. The main idea of logic as a programming language in neural networks was introduced in [19] to serve, represent and interpret a problem. The motivating force behind it is the notion that a single formalism is adequate for both logic and computing, and that it subsumes computation. An algorithm may be regarded as consisting of a logic component that defines the knowledge to be used in solving a given problem, and a control component that decides the problem-solving techniques by which that knowledge is utilized. From the perspective of combinatorial optimization, logic programming can therefore itself be viewed as an optimization problem. The logic programming idea was extended by incorporating a competent logical mapping system for propositional knowledge through a connectionist network [20]. The proposed symmetric connectionist network (SCN) has therefore attracted the interest of the AI and computational science communities, as it combines the advantages of both a logic program and a neural network in a single network [20]. Wan Abdullah developed a learning method for calculating the synaptic weights of the HNN corresponding to a suggested logical system, and the technique is still applicable, particularly when working with an RNN [21]. Horn logic programming was upgraded by integrating an efficient relaxation method to generate the optimal final neuron states in [21]. A Boltzmann machine based on a stochastic approach for the HNN model has been proposed in [22], which reduced neuron oscillations during the HNN model retrieval phase.
The notion of logic programming merged with the radial basis function neural network (RBFNN) as a single computational model has been achieved in [23]. The results reveal that RBFNN in logic programming inevitably worked well. Subsequently, the flexibility of HNN logic programming within mean field theory has been depicted in [18,24]. A comprehensive investigation of the potential effect of systematic satisfiability programming as a logical rule to optimize the output weights and parameters in RBFNN was carried out in [25]. The study investigated the effectiveness of the 2SAT logical rule in reducing the computational burden of RBFNN by obtaining the RBFNN parameters. Successful implementations of k-SAT as a logical representation in HNN have been proposed [26,27].
The benefit of metaheuristic algorithms in HNN is that a network can be improved at different stages, such as weight training and adjustment, system adaptation for determining the number of layers, node transfer functions, the retrieval phase and learning rules. The MEA has emerged as a new evolutionary metaheuristic algorithm that has been applied to finding optimal solutions in computational optimization and engineering applications [28–31]. Unlike many other metaheuristics, which are mainly inspired by swarms or natural evolutionary processes, EA adopts the socio-political process of the presidential elections conducted in many countries. The synthesis of metaheuristics and the HNN has been proven effective in carrying out satisfiability as logic programming [18,28,29]. Unfortunately, to the best of our knowledge, no effort has combined the global optimization ability of the election algorithm to facilitate the training phase of HNN in carrying out random satisfiability optimization (random k-SAT). Therefore, we propose a new hybrid computational model incorporating a hybrid election algorithm (HEA) to foster the learning phase of HNN in optimizing random k-satisfiability logic representation, attaining better accuracy, sensitivity and robustness for higher-order networks. The contributions of the present study include the following:
– To integrate the new logical representation, namely random k-SAT, in HNN.
– To implement the newly proposed EA incorporated with random k-SAT into a discrete Hopfield neural network (HNN-RkSAT-EA).
– To explore the feasibility of the HNN-RkSAT-EA model and measure its performance in terms of accuracy against the existing HNN models.
– To establish a comparison of HNN-RkSAT-EA with two existing HNN models.
The proposed hybrid computational model will be beneficial to the mathematics and computational science communities by providing an alternative method for carrying out combinatorial optimization problems (TSP, scheduling, graph colouring, timetabling, etc.), mathematical logic, mathematical approximation, Lie group integrators, computational algebra and the computational sciences. Our results show that the newly proposed hybrid computational model improves the learning phase of HNN-RkSAT-EA, demonstrating the best performance compared with other existing work.
This paper is organized as follows. Section 2 presents the random k-satisfiability problem, including random satisfiability in the discrete Hopfield neural network. Section 3 discusses the neuro-searching techniques employed in this research, namely the exhaustive search algorithm (ES) and a modified version of the election algorithm (EA). Section 4 presents the experimental model setup. Finally, Sections 5 and 6 contain graphs of the simulated results, the discussion, and the conclusion of this exploration.
2 Mathematical model of discrete Hopfield neural network
Hopfield, to model biological memory, proposed the so-called Hopfield associative memory (HAM). The HNN model is based on the McCulloch-Pitts paradigm of the artificial neuron [4]. In such a network, the state space constitutes the symmetric unit hypercube (i.e. the components of the state vectors are bipolar, in {−1, 1}). The associated convergence theorem ensures that the initial state converges to a stable state in the serial mode of operation, and similarly in the fully parallel mode of operation [31]. The stable states are realized as the "memory states" of the associative memory. Hopfield was naturally interested in synthesizing a CAM with certain preselected "desired stable states". It is a novel neural computational framework implementing an auto-associative memory. HNNs belong to the recurrent, or fully interconnected, category of neural networks: all neurons are connected, but there is no self-recurrent connection on any neuron. An HNN consists of N interconnected nodes; each node can be expressed by a simplified Ising variable of a spin glass in statistical mechanics [32]. The neurons in HNN are considered bipolar, L_j ∈ {−1, 1}, conforming to the dynamics L_j → sign(h_j). The model's dynamical system, including the state-evolving equation, is calculated based on the general asynchronous updating rule of HNN given in equation (1):$${L}_{j}=\{\begin{array}{ll}1, & \mathrm{if}\ \sum _{k}^{N}{U}_{jk}{L}_{k}(t)>{\tau}_{j}\\ -1, & \text{otherwise}\end{array}$$(1)where U_jk is the synaptic connection matrix that establishes the strength of the connection between neurons j and k, L_k defines the state of unit k and τ_j describes the threshold of neuron j. Several studies [16,28,33–38] defined τ_j = 0 to verify that the HNN always leads to a monotonic decrease in energy.
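The asynchronous updating rule of equation (1) can be sketched in a few lines of Python; the weight matrix, threshold values and initial state below are illustrative assumptions, not values from the paper's experiments.

```python
import numpy as np

def async_update(L, U, tau, order=None):
    """One asynchronous sweep of the bipolar Hopfield update rule:
    L_j <- +1 if sum_k U[j, k] * L[k] > tau[j], else -1 (eq. (1))."""
    L = L.copy()
    if order is None:
        order = range(len(L))
    for j in order:                      # neurons are updated one at a time
        h = U[j] @ L                     # local field of neuron j
        L[j] = 1 if h > tau[j] else -1
    return L

# Illustrative 3-neuron network: symmetric weights, no self-loops.
U = np.array([[0., 1., -1.],
              [1., 0., 1.],
              [-1., 1., 0.]])
tau = np.zeros(3)                        # tau_j = 0, as in the cited studies
L = np.array([1, -1, -1])
L_new = async_update(L, U, tau)
```

Repeated sweeps leave a stable state unchanged, which is the convergence behaviour the serial mode of operation guarantees.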
Each time a neuron is connected through U_jk, the value of the synaptic connection will be preserved as a stored pattern in an interconnected vector, where ${U}^{(1)}={[{U}_{jk}^{(1)}]}_{n\times n}$ and ${U}^{(2)}={[{U}_{jk}^{(2)}]}_{n\times n}$ for N-dimensional variable vectors τ = (τ_1, τ_2, ..., τ_N)^T. As observed in [16,20], the constraint on the synaptic weight matrices does not allow self-loop neuron connections, ${U}_{jj}^{(2)}={U}_{kk}^{(2)}=\mathrm{...}={U}_{ii}^{(2)}=0$, and the neuron synaptic weight matrix is symmetrical, ${U}_{kj}^{(2)}={U}_{jk}^{(2)}$. The HNN energy dynamics function and the CAM offer a versatile system with high capacity, error tolerance, rapid memory recovery and recovery from partial inputs [20,31,32], which makes HNN ideal for incorporation with combinatorial optimization problems such as random k-SAT. HNN can use a logical rule to instruct the behaviour of the system configuration based on the synaptic strength matrix. The logical formulation in this scenario consists of variable vectors framed in the form of N neurons. The implementation of the random k-SAT logical rule in HNN is abbreviated as HNN-RkSAT, and the main objective is to minimize the logical inconsistencies by reducing the network cost function towards a minimum solution. Random k-SAT can be integrated into HNN by assigning each variable to a neuron L_j within the defined cost function. Hence, the cost function ${E}_{{\mathbb{Q}}_{RkSAT}}$ that governs the combination of HNN and random k-SAT is given in equation (2):$${E}_{{\mathbb{Q}}_{RkSAT}}={\sum _{j=1}^{NC}}{\prod _{k=1}^{T}}{C}_{jk}$$(2)where NC and T describe the number of clauses generated and the number of variables in the solution space, respectively.
The inconsistencies of the logical clause ${E}_{{\mathbb{Q}}_{RkSAT}}$ are provided in equation (3) as follows:$${C}_{jk}=\{\begin{array}{c}\frac{1}{2}(1-{L}_{y}),\ \mathrm{if}\ \neg y\\ \frac{1}{2}(1+{L}_{y}),\ \mathrm{if}\ y\end{array}$$(3)
The weight matrix represents the synaptic connections between the clauses and variables in a given logical formula. This is done by equating the cost function of HNN $({E}_{{\mathbb{Q}}_{RkSAT}})$ to the final energy dynamics of the network $({H}_{{\mathbb{Q}}_{RkSAT}})$, where the neural state of each variable vector L_j at time t is L_j(t). The local field of HNN can be represented in equation (4) as follows:$${h}_{j}(t)={\sum _{k=1,k\ne j}^{N}}{U}_{jk}^{(2)}{L}_{k}+{U}_{j}^{(1)}$$(4) $${L}_{j}(t+1)=\{\begin{array}{ll}1, & {\sum _{k=1,k\ne j}^{N}}{U}_{jk}^{(2)}{L}_{k}+{U}_{j}^{(1)}\ge 0\\ -1, & {\sum _{k=1,k\ne j}^{N}}{U}_{jk}^{(2)}{L}_{k}+{U}_{j}^{(1)}<0\end{array}$$(5)
where ${U}_{jk}^{\left(2\right)}$ and ${U}_{j}^{\left(1\right)}$ are the second- and first-order synaptic weights of the embedded random k-SAT. Equations (4) and (5) are vital to ensure that the neurons L_j always converge so that ${E}_{{\mathbb{Q}}_{RkSAT}}\to 0$. The Lyapunov energy function ${H}_{{\mathbb{Q}}_{RkSAT}}$ is utilized to ensure that the energy dynamics of the network decrease monotonically. The final energy dynamics of HNN is provided in equation (6) as follows:$${H}_{{\mathbb{Q}}_{RkSAT}}=-\frac{1}{2}{\sum _{j=1,j\ne k}^{N}}{\sum _{k=1,k\ne j}^{N}}{U}_{jk}^{(2)}{L}_{j}{L}_{k}-{\sum _{j=1}^{N}}{U}_{j}^{(1)}{L}_{j}.$$(6)
The aim is to find a feasible solution for which the cost function ${H}_{{\mathbb{Q}}_{RkSAT}}$ is optimal. Convergence toward minimum energy is considered an optimal solution of the optimization search. The value of ${H}_{{\mathbb{Q}}_{RkSAT}}$ signifies the energy relative to the absolute final energy ${H}_{{\mathbb{Q}}_{RkSAT}}^{\mathrm{min}}$ obtained from random k-SAT. Hence, the quality of the final neuron configuration can be properly examined based on the condition [15,20–34] defined in equation (7):$$\left|{H}_{{\mathbb{Q}}_{RkSAT}}-{H}_{{\mathbb{Q}}_{RkSAT}}^{\mathrm{min}}\right|\le \xi .$$(7)
If the embedded random k-SAT in HNN does not satisfy equation (7), the final state obtained is most likely trapped in a local minimum. The synaptic connections ${U}_{jk}^{\left(2\right)}$ and ${U}_{j}^{\left(1\right)}$ can be effectively obtained by using the Wan Abdullah method [14] or the Hebbian learning rule [33]. The energy dynamics of an HNN has many local minima; consequently, the network is likely to reach an equilibrium state that does not correspond to a solution of the problem. A major task in this field is to search for evolutionary strategies, such as the MEA, to move the network out of local minima.
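The Lyapunov energy of equation (6) and the quality condition of equation (7) can be sketched as follows; the weight values and the tolerance ξ used in the example are assumed purely for illustration.

```python
import numpy as np

def lyapunov_energy(L, U2, U1):
    """Final energy of eq. (6): H = -1/2 sum_{j!=k} U2[j,k] L_j L_k
    - sum_j U1[j] L_j.  U2 is assumed symmetric with zero diagonal,
    so the j != k restriction is implicit in the quadratic form."""
    return -0.5 * (L @ U2 @ L) - U1 @ L

def is_global_minimum(H, H_min, xi=0.001):
    """Quality test of eq. (7): |H - H_min| <= xi."""
    return abs(H - H_min) <= xi

# Illustrative 2-neuron weights (assumed values).
U2 = np.array([[0., 0.5],
               [0.5, 0.]])
U1 = np.array([0.25, 0.25])
H = lyapunov_energy(np.array([1, 1]), U2, U1)
```

A final state is accepted as a global-minimum solution only when the energy gap test of equation (7) holds; otherwise it is counted as a local minimum.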
2.1 Random satisfiability in discrete Hopfield neural network
Random k-satisfiability (random k-SAT) is a logical representation consisting of a non-systematic number of literals per clause. Random k-SAT is a variant of the Boolean formula that can be represented in conjunctive normal form (CNF). According to [35], in a scale-free model, given n, m and β, to formulate a random k-SAT instance, m clauses are generated from n logic variables independently at random from the set of ${2}^{k}\left(\begin{array}{l}n\\ k\end{array}\right)$ possible clauses, sampling every possible clause with a probability of 0.5. In this paper, we limit our investigation to k ≤ 2. For the case of k ≤ 2, random k-SAT has the following properties:
– A set of variables {x_1, x_2, x_3, ..., x_n}, in a clause C_i, where i = 1, 2, 3, ..., n, that consists of x or ¬x.
– Randomly select 2 variables from the set of n variables, with a 0.5 probability of negating each variable in the clause [34,35].
– Each x_i in C_i is connected by a disjunction (∨).
– Each clause C_i is connected by a conjunction (∧).
The Boolean values for each x_i are bipolar, x_i ∈ {−1, 1}, exemplifying the notions of FALSE and TRUE respectively. One example of a random k-SAT formulation is as follows:$${\mathbb{Q}}_{R2SAT}=\left(\neg {F}_{1}\vee {F}_{2}\right)\wedge \left({E}_{1}\vee \neg {E}_{2}\right)\wedge \neg D$$(8)where ℚ _{ R−2SAT } describes the random 2-SAT logical rule that consists of the literals F_1, F_2, E_1, E_2 and D and the logical clauses (¬F_1 ∨ F_2), (E_1 ∨ ¬E_2) and ¬D generated at random. In general, the formulation can be generated in different combinations of neurons (atoms) as the number of neurons (atoms) fluctuates. Comparatively, a high number of neurons (atoms) per clause increases the probability of many clauses being satisfied [16–18,34].
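The clause-generation procedure described above (select 2 distinct variables per clause and negate each with probability 0.5) can be sketched as follows; the literal encoding, a positive integer +v for x_v and −v for ¬x_v, is an implementation assumption rather than the paper's notation.

```python
import random

def generate_random_2sat(n_vars, n_clauses, seed=0):
    """Generate a random 2-SAT formula: each clause picks 2 distinct
    variables out of n_vars and negates each chosen literal with
    probability 0.5.  Variables are numbered 1..n_vars; a literal is
    +v (positive occurrence) or -v (negated occurrence)."""
    rng = random.Random(seed)
    formula = []
    for _ in range(n_clauses):
        v1, v2 = rng.sample(range(1, n_vars + 1), 2)   # distinct variables
        lit1 = v1 if rng.random() < 0.5 else -v1       # negate with prob 0.5
        lit2 = v2 if rng.random() < 0.5 else -v2
        formula.append((lit1, lit2))
    return formula

formula = generate_random_2sat(5, 3, seed=1)
```

Each returned pair corresponds to one disjunctive clause; the conjunction of all pairs is the full CNF formula.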
The cost function of the negated random 2-SAT in bipolar form is as follows:$${E}_{{\mathbb{Q}}_{R2SAT}}=\frac{1}{2}\left(1+{S}_{{F}_{1}}\right)\frac{1}{2}\left(1-{S}_{{F}_{2}}\right)+\frac{1}{2}\left(1-{S}_{{E}_{1}}\right)\frac{1}{2}\left(1+{S}_{{E}_{2}}\right)+\frac{1}{2}\left(1+{S}_{D}\right)$$(9)
Random 2-SAT can be regarded as a constrained optimization problem in the HNN model. It can be implemented in the network by storing atom truth values and generating an optimized cost function when the maximum number of clauses is fulfilled. Equation (9) can be expanded into equation (10) as follows:$${E}_{{\mathbb{Q}}_{R2SAT}}=\frac{1}{4}({S}_{{F}_{1}}-{S}_{{F}_{1}}{S}_{{F}_{2}}+{S}_{{E}_{2}}-{S}_{{F}_{2}}-{S}_{{E}_{1}}-{S}_{{E}_{1}}{S}_{{E}_{2}})+\frac{1}{2}{S}_{D}+1.$$(10)
A consistent interpretation leads to ${E}_{{\mathbb{Q}}_{R2SAT}}=0$ as the minimum value, corresponding to the fact that all random 2-SAT clauses are satisfied; the value of ${E}_{{\mathbb{Q}}_{R2SAT}}$ is proportional to the number of unsatisfied clauses. By comparing the cost function in equation (10) with the Lyapunov energy function in equation (6), the respective synaptic weights of HNN-R2SAT are obtained from equation (11):$${H}_{{\mathbb{Q}}_{R2SAT}}=-\frac{1}{2}(2{L}_{\left[{F}_{1}{F}_{2}\right]}^{\left(2\right)}{S}_{{F}_{1}}{S}_{{F}_{2}}+2{L}_{\left[{F}_{1}{E}_{1}\right]}^{\left(2\right)}{S}_{{F}_{1}}{S}_{{E}_{1}}+2{L}_{\left[{F}_{1}{E}_{2}\right]}^{\left(2\right)}{S}_{{F}_{1}}{S}_{{E}_{2}}+2{L}_{\left[{F}_{1}D\right]}^{\left(2\right)}{S}_{{F}_{1}}{S}_{D}+2{L}_{\left[{F}_{2}{E}_{1}\right]}^{\left(2\right)}{S}_{{F}_{2}}{S}_{{E}_{1}}+2{L}_{\left[{F}_{2}{E}_{2}\right]}^{\left(2\right)}{S}_{{F}_{2}}{S}_{{E}_{2}}+2{L}_{\left[{F}_{2}D\right]}^{\left(2\right)}{S}_{{F}_{2}}{S}_{D}+2{L}_{\left[{E}_{1}{E}_{2}\right]}^{\left(2\right)}{S}_{{E}_{1}}{S}_{{E}_{2}}+2{L}_{\left[{E}_{1}D\right]}^{\left(2\right)}{S}_{{E}_{1}}{S}_{D})-({L}_{{F}_{1}}^{\left(1\right)}{S}_{{F}_{1}}+{L}_{{F}_{2}}^{\left(1\right)}{S}_{{F}_{2}}+{L}_{{E}_{1}}^{\left(1\right)}{S}_{{E}_{1}}+{L}_{{E}_{2}}^{\left(1\right)}{S}_{{E}_{2}}+{L}_{D}^{\left(1\right)}{S}_{D}).$$(11)
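The cost function of equation (9) can be checked numerically: each product term equals 1 exactly when its clause is violated, so the cost equals the number of unsatisfied clauses. The sketch below hard-codes the example formula of equation (8); the dictionary-based state encoding is an assumption made for readability.

```python
def cost_R2SAT(S):
    """Cost function of eq. (9) for the example formula
    Q = (~F1 v F2) ^ (E1 v ~E2) ^ ~D, with bipolar states
    S = {'F1': +/-1, ...}.  Each term is 1 iff its clause is violated."""
    c1 = 0.5 * (1 + S['F1']) * 0.5 * (1 - S['F2'])   # (~F1 v F2) violated
    c2 = 0.5 * (1 - S['E1']) * 0.5 * (1 + S['E2'])   # (E1 v ~E2) violated
    c3 = 0.5 * (1 + S['D'])                          # ~D violated
    return c1 + c2 + c3

# A satisfying assignment drives the cost to its minimum E = 0.
sat = {'F1': -1, 'F2': 1, 'E1': 1, 'E2': -1, 'D': -1}
e_sat = cost_R2SAT(sat)
```

For the all-ones assignment (F1, F2, E1, E2, D) = (1, 1, 1, 1, 1) the cost is 1, agreeing with the observation below equation (11) that only 2 of the 3 clauses are satisfied there.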
According to equation (8), ℚ _{ R−kSAT } consists of 5 logical variables randomly chosen from the set of n predetermined variables, 2 negative literals and 3 clauses. The outcome of equation (8) is ℚ _{ R−kSAT } = −1 if (F_1, F_2, E_1, E_2, D) = (1, 1, 1, 1, 1), with 2 clauses satisfied. The global minimum energy obtained is ${H}_{{\mathbb{Q}}_{R2SAT}}^{\mathrm{min}}=-1$. The properties of HNN can be used to govern the behaviour of random k-SAT. HNN-RkSAT-EA is a brand-new model, as no previous effort has utilized the benefits of the modified election algorithm in accelerating the learning performance of HNN in finding the optimal solution to random k-SAT logic representation. The proposed hybrid model is developed based on random k-SAT logical clauses. The main focus of this study is therefore to explore the feasibility of the hybrid modified election algorithm incorporated in the Hopfield network model for optimal random k-SAT logical representation; the efficacy of the proposed HNN-RkSAT-EA model will be compared with other current Hopfield models (HNN-RkSAT-ES and HNN-RkSAT-ICA).
3 Neurosearching techniques
3.1 Exhaustive search algorithm
The aim of selecting the traditional searching framework is to determine the degree of efficacy of the HNN-R2SAT-ES model in carrying out random 2-SAT logic programming. Apart from that, there are feasible satisfiable assignments for any random 2-SAT logical representation [34–36]. The satisfied clauses for the ES heuristic are extracted after a brute-force "trial and error" procedure is carried out. The exhaustive search does the survey required to produce a precise topographical map of the search space. This approach requires that an extremely large but finite solution space be checked against the number of combinations of the various variable values.
The objective function of ES is expressed as follows:$$\mathrm{max}\left|{f}_{ES}\right|$$(12)where$${f}_{ES}={\prod _{j=1}^{NC}}{C}_{j}$$(13)
Stage 1: Initialization
The random population of individuals in the form of variable vectors S_j, where S_j(t) ∈ [−1, 1], is initiated.
Stage 2: Present input vector
Present the input pattern S _{ j } = (S _{ j1}, S _{ j2}, ..., S _{ jN }) to the network and store it, where j = (1, 2, ..., m).
Stage 3: Fitness Evaluation
The variable vector is tested based on the fitness of S_j(t) at a quantified variable position in the solution space:$${f}_{j}({S}_{j}(t))={\sum _{j=1}^{NC}}{C}_{j}$$(14)where f_j designates the fitness of a given neuron combination, NC designates the number of logical clauses in the random 2-SAT and C_j the value of each logical clause passed through the eligibility stage as follows:$${C}_{j}=\{\begin{array}{ll}1, & \mathrm{eligible}\\ 0, & \mathrm{otherwise}\end{array}$$(15)
Stage 4: Clause evaluation
Preserve the clause with the highest possible fitness; otherwise, identify a new candidate variable vector. Exhaustive searches do not get trapped in local minima given fine enough sampling, and they work for both continuous and discontinuous variable functions [30,34]. Nonetheless, achieving a global minimum requires an extremely long time. Another drawback of this strategy is that the global minimum may be skipped because of undersampling, and it is easy to undersample when evaluating the cost function takes a long time. For this reason, exhaustive searches are only practical for a small number of variables, as observed in [16,17,20].
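Stages 1–4 above can be sketched as a brute-force enumeration over all bipolar assignments; the clause encoding (±v literals) and the example formula are implementation assumptions.

```python
from itertools import product

def exhaustive_search(formula, n_vars):
    """Brute-force enumeration of all 2**n bipolar assignments.
    A literal +v / -v is satisfied when state[v] is +1 / -1.
    Returns the assignment with the highest fitness, i.e. the
    largest number of satisfied clauses (eq. (14))."""
    best_state, best_fit = None, -1
    for bits in product([-1, 1], repeat=n_vars):       # Stage 1: enumerate
        state = dict(zip(range(1, n_vars + 1), bits))  # Stage 2: present vector
        fit = sum(any(state[abs(l)] == (1 if l > 0 else -1) for l in clause)
                  for clause in formula)               # Stage 3: fitness
        if fit > best_fit:                             # Stage 4: keep the best
            best_state, best_fit = state, fit
    return best_state, best_fit

# Example: (~x1 v x2) ^ (x1 v ~x3)
state, fit = exhaustive_search([(-1, 2), (1, -3)], 3)
```

The exponential loop over `product([-1, 1], repeat=n_vars)` makes the cost of this method explicit, which is why ES is practical only for small numbers of variables.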
3.2 Election algorithm (EA)
The election algorithm is a simple computational technique inspired by the socio-political phenomenon of the presidential elections conducted in many countries. The MEA begins by describing the optimization variables, the cost function and the cost, and ends by checking for convergence, like other optimization algorithms. It is considered a robust evolutionary methodology, attracting a considerable amount of research in optimization. If an environment is considered as a function, the MEA serves as a function optimizer; in this case, each individual in a population is a sample point in the function space [28,29].
In the EA simulation, a random weight matrix (population) is generated. In every phase, the weight matrix is updated through the campaign and coalition mechanisms, and the eligibility values are evaluated to ascertain the best candidates' or voters' positions. The cycle of reorganizing the new weight vector repeats with the best individual vectors and continues the search until an appropriate (best) solution is reached [28,29]. The HNN energy dynamics is used as the second eligibility assessment method to pick the most appropriate weight matrix for solving the random k-SAT problem. In contrast, the basic motivation of the election algorithm is to discover the variables that maximize the number of satisfied random clauses before joining the HNN. The election algorithm in random k-SAT is specifically made up of the distinctive stages 1–5.
In general, the global optimization problem can be stated as below (without loss of generality, a maximization problem is considered):$$\mathrm{max}\left|{f}_{MEA}\right|$$(16) $${f}_{MEA}={\prod _{j=1}^{NC}}{C}_{j}$$(17)
where f_MEA is the eligibility (objective) function, NC describes the number of clauses in random k-SAT and C_j designates the value of each clause tested by the EA, given as follows:$${C}_{j}=\{\begin{array}{ll}1, & \mathrm{eligible}\\ 0, & \mathrm{otherwise}\end{array}$$(18)
Stage 1: Initialization
A random population of individuals is initialized based on certain features and the size of the population, in the form of decision variables p_j, where p_j(t) ∈ [−1, 1]. Each interpretation is a potential solution to the randomly generated random k-SAT clause. The MEA procedure begins by splitting the entire population into P parties. Initially, the number of individuals in each party is denoted by N_p, in which N_1 = ... = N_p = ... = N_P, where p = 1, 2, ..., P.
Stage 2: Eligibility evaluation
Every randomized variable vector undergoes assessment by the eligibility function f_MEA. Each correct variable vector yielding a satisfied random k-SAT clause is "awarded". The number of satisfied clauses constitutes the eligibility of the individual (variable) during the eligibility assessment. The objective function of the election algorithm for S_j(t) is as follows:$${f}_{j}({p}_{j}(t))={\sum _{j=1}^{NC}}{C}_{j}.$$(19)
The role of the eligibility function is to measure the goodness of each variable.
Stage 3: Create initial parties and their supporters.
Split the sampling space connected with each variable vector x_j, j = 1, ..., N into P party subspaces, i.e.$$\forall {x}_{j}:\ {N}_{c}^{p}=[{N}_{c}^{l,p},{N}_{c}^{u,p}],\ p=1,\mathrm{...},P,\ j=1,\mathrm{...},N$$(20)From each party subspace p = 1, ..., P, ${N}_{c}^{p}=[{N}_{c}^{1},{N}_{c}^{2},\mathrm{...},{N}_{c}^{P}]$ associated with each variable vector x_j, j = 1, ..., N, N_P values are selected and assessed based on the associated objective functions. Then, the individuals connected with each party p = 1, ..., P can be designated as follows:
$\forall pf({v}_{1,{m}_{p}},\mathrm{...},{v}_{j,{m}_{p}},\mathrm{...},{v}_{N,{m}_{p}}),{m}_{p}=1,\mathrm{...},{M}_{P}$
Or
$$p=\left[\begin{array}{l}{v}_{1,1},\mathrm{...},{v}_{j,1},\mathrm{...},{v}_{N,1}\\ \vdots \\ {v}_{1,{m}_{p}},\mathrm{...},{v}_{j,{m}_{p}},\mathrm{...},{v}_{N,{m}_{p}}\\ \vdots \\ {v}_{1,{M}_{P}},\mathrm{...},{v}_{j,{M}_{P}},\mathrm{...},{v}_{N,{M}_{P}}\end{array}\right],p=1,\mathrm{...},P.$$(21)
Select the best values to serve as the initial number of candidates N_C in forming the initial parties [28]. Thus, the number of individuals sampled as initial candidates is represented in equation (22) as follows,$${N}_{C}={\tau}_{r}{N}_{pop}$$(22)where N_C is the total number of candidates, τ_r is the candidate rate and N_pop is the total population. The remaining individuals are the candidates' advocates (voters), whose total number is represented in equation (23) as follows,$${\mu}_{v}={N}_{pop}-{N}_{C}$$(23)where μ_v is the total number of voters, N_pop is the total population and N_C is the total number of candidates. All supporters are divided among the candidates based on their similarity, with the Euclidean distance used as the similarity measure as follows.$$d({v}_{i},{c}_{j})=\mathrm{d}\mathrm{i}\mathrm{s}\mathrm{t}\left({\delta}_{ir}^{f},{\delta}_{jr}^{f}\right)$$(24)where ${\delta}_{ir}^{f}$ indicates the ith voter's eligibility and ${\delta}_{jr}^{f}$ indicates the jth candidate's eligibility. Each voter is assigned to the party whose candidate is closest to it.
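The party-formation stage (eqs. (22)–(24)) can be sketched as follows; the ranking-by-fitness candidate selection and the toy fitness used in the usage example are illustrative assumptions, not the paper's exact procedure.

```python
import math
import random

def form_parties(population, fitness, candidate_rate):
    """Split a population into parties: the best N_C = candidate_rate *
    N_pop individuals become candidates (eq. (22)), the rest are voters
    (eq. (23)), and every voter joins the candidate nearest in Euclidean
    distance (eq. (24))."""
    n_candidates = max(1, int(candidate_rate * len(population)))  # eq. (22)
    ranked = sorted(range(len(population)),
                    key=lambda i: fitness[i], reverse=True)
    candidates = ranked[:n_candidates]
    voters = ranked[n_candidates:]                                # eq. (23)
    parties = {c: [] for c in candidates}
    for v in voters:                                              # eq. (24)
        nearest = min(candidates,
                      key=lambda c: math.dist(population[v], population[c]))
        parties[nearest].append(v)
    return parties

random.seed(0)
pop = [[random.choice([-1, 1]) for _ in range(4)] for _ in range(8)]
fit = [sum(ind) for ind in pop]  # placeholder fitness for illustration
parties = form_parties(pop, fit, candidate_rate=0.25)
print(len(parties))  # 2 parties for 8 individuals at rate 0.25
```

Each dictionary key is a candidate index and each value lists its supporters, so the structure mirrors equation (21).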
Stage 4: Advertising Campaign: this stage involves three steps as follows.
Step 1: Positive advertisement
To model the EA positive-advertisement mechanism, the vector variable of a candidate in the solution space is randomly selected. Random numbers are sampled to pick the positions of the vector variables (voters) to be substituted. The selection rate is denoted by λ _{ S } = rand ∈ [0, 1]. The number of vector variable values transferred from a candidate toward its supporters is represented in equation (25) as follows.$${\psi}_{S}={\lambda}_{S}{S}_{C}$$(25)where ψ _{ S } is the number of sampled vector variables to be replaced, λ _{ S } is the selection rate and S _{ C } is the total number of vector variables of the candidate in the solution space. Clearly, in an MEA party the effectiveness of an advertisement varies with the eligibility (Euclidean) distance between a candidate and its relevant supporters. In a positive advertisement, a candidate with an excellent plan and idea influences the voters' decisions; the number of its supporters increases, and the chances of improving the quality of the party's plans increase accordingly.
To model this goal, we define the eligibility distance coefficient (π _{ e }) as follows.$${\pi}_{e}=\frac{1}{\parallel {\rho}_{e}-{\rho}_{v}\parallel +1}$$(26)where ρ _{ e } and ρ _{ v } designate the eligibility of the candidate and the voter, respectively. MEA applies the Euclidean metric to measure the distance between the vector variables ρ _{ e } and ρ _{ v } in the solution space. In the MEA advertising mechanism, after selecting λ _{ S } and measuring π _{ e }, the values of the identified vector variables of the candidate are multiplied by the coefficient π _{ e } and then replace the identified vector entries of the associated voters. In other words, let ϵ _{i,old} be the value of the ith chosen vector variable of a voter before the advertising advances; then, after the campaign, the updated value of the identified vector is given as follows [28,29].$${\epsilon}_{i,\mathrm{new}}={\pi}_{e}{\epsilon}_{i}$$(27)
Based on equation (27), in the campaign process the nearer supporters are much more influenced by their associated candidate than the other followers are.
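A minimal sketch of the positive-advertisement step (eqs. (25)–(27)); the function name and the use of raw vector positions in the distance computation are assumptions, and a fully bipolar implementation would additionally threshold the updated values back to {1, −1}.

```python
import math
import random

def positive_advertisement(candidate, voter, rng=random.random):
    """One positive-campaign step: pick psi_S = lambda_S * S_C voter
    positions at random (eq. (25)) and overwrite them with the candidate's
    values scaled by pi_e = 1 / (||c - v|| + 1) (eqs. (26)-(27))."""
    selection_rate = rng()                               # lambda_S in [0, 1]
    n_replace = int(selection_rate * len(candidate))     # eq. (25)
    pi_e = 1.0 / (math.dist(candidate, voter) + 1.0)     # eq. (26)
    updated = list(voter)
    for i in random.sample(range(len(voter)), n_replace):
        updated[i] = pi_e * candidate[i]                 # eq. (27)
    return updated
```

Since π_e shrinks with distance, distant voters receive a weaker pull toward the candidate, matching the observation after equation (27).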
Step 2: Negative advertisement
In the implementation of MEA, contrast advertisement is used among the different negative campaigning strategies. Candidates, through their campaigns of resistance, seek to attract the members of other parties towards themselves. This leads to an upsurge in support of the popular parties and a decline in popularity of the marginalized parties. The difference in eligibility between the voters and the candidates in the defending party is first measured using the Euclidean metric in equation (24).
Step 3: Coalition
Candidates form a coalition if they share the same ideas. In MEA, two or more parties with the same ideas and goals in the solution space can sometimes come together to create a new party: some candidates leave the campaign and join another candidate, called the "leader", while a candidate leaving the election arena is referred to as a "follower". The followers coalesce with the leader and encourage their supporters to follow the leader, so all the followers' supporters become the leader's supporters. In our application, among the candidates ${N}_{c}^{p}=[{N}_{c}^{1},{N}_{c}^{2},\mathrm{...},{N}_{c}^{P}]$ who wish to unite, a candidate is randomly selected according to equation (28) as the successor (leader) candidate and the remaining candidates are considered followers [28,29].
If a coalition among party candidates occurs, the remaining candidates ${N}_{c}^{p}$ will be randomized according to the following equation;
$${N}_{c}^{p}=\begin{cases}{N}_{c}^{p}, & \mathrm{rand}\left[0,1\right]\ge \xi \\ {N}_{P}, & \mathrm{otherwise}\end{cases}$$(28)
where c, p ∊ [1, N] and ξ = rand_{ j } ∈ [0, 1]. The remaining candidates and their supporters will be allocated to the new (leader) candidate.
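One possible reading of the coalition rule in equation (28), sketched in Python; the dictionary representation of parties and the follower adopting the leader's position are illustrative assumptions.

```python
import random

def coalition(candidates, xi=None, rng=random.random):
    """Coalition operator (eq. (28)): a randomly chosen leader absorbs the
    other candidates; each remaining candidate keeps its position when
    rand[0,1] >= xi and otherwise follows the leader's position."""
    xi = rng() if xi is None else xi
    leader = random.choice(list(candidates))
    merged = {}
    for name, position in candidates.items():
        if name == leader or rng() >= xi:
            merged[name] = position            # keep N_c^p
        else:
            merged[name] = candidates[leader]  # follower adopts the leader
    return leader, merged
```

After a coalition the followers' supporters would be reassigned to the leader, as described in the text above.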
Stage 5: Election Day (Stopping condition)
Until a termination condition is met, the three operators, positive advertisement, negative advertisement and coalition, are applied to update the population. Ultimately, the candidate who gets the most votes is declared the winner and corresponds to the best solution found for the optimization and search problem [28]. The modified election algorithm incorporated in the Hopfield neural network model to carry out random k-SAT is expected to be feasible. If a vector variable does not achieve the desired eligibility, its bits will continue to improve through the negative campaign and coalition strategies. To explore the solution space, the 100 to 10000 iterations usually set in most metaheuristics are also considered in our case [16,20,34]. A bipolar search involving 1 and −1 is used, since it is simple for a variable to converge to the global optimum. In this research, a modified election algorithm is proposed to accelerate the learning phase of the HNN as a single model in optimizing random 2-SAT logic representation.
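The overall loop of Stages 1–5 might look like the following sketch; the single-flip "campaign" update is a deliberately simplified stand-in for the full advertisement and coalition operators, and all names and parameters are illustrative.

```python
import random

def mea_search(clauses, n_vars, n_pop=20, max_iter=200, seed=0):
    """Simplified MEA-style search: initialize a bipolar population,
    score eligibility as the number of satisfied clauses, and improve
    the best individual until every clause is satisfied (election day)
    or max_iter is reached."""
    rng = random.Random(seed)

    def fit(p):  # eligibility, eq. (19)
        return sum(1 for c in clauses if any(p[i] == s for i, s in c))

    pop = [[rng.choice([-1, 1]) for _ in range(n_vars)]
           for _ in range(n_pop)]
    best = max(pop, key=fit)
    for _ in range(max_iter):
        if fit(best) == len(clauses):   # stopping condition: all satisfied
            break
        trial = list(best)
        j = rng.randrange(n_vars)
        trial[j] = -trial[j]            # stand-in for a campaign update
        if fit(trial) >= fit(best):
            best = trial
    return best, fit(best)
```

When the returned eligibility equals the number of clauses, the assignment corresponds to a fully satisfied (global minimum energy) state.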
4 HNN model experimental setup
In this study, MEA has been incorporated into HNN in the search for an optimal solution to random k-SAT logic representation. The proposed hybrid computational model is compared with the existing HNN-random k-SAT-ES [16] and HNN-random 2-SAT-ICA [37] models. Both existing HNN models employed simulated datasets to establish random k-SAT logical clauses. To achieve a meaningful comparison with the existing HNN models, all source code was written as a simulation program in Dev-C++ release version 5.11, running on a device with an Intel® Celeron® CPU B800 @ 2 GHz processor with 4 GB RAM under Windows 8.1. Tables 1–3 indicate the parameters used during each HNN model execution.
List of parameters used in HNN-R2SATEA.
4.1 Experimental results & discussion
In this study, the training phase of the HNN-RkSATEA model is explored in comparison with the other existing HNN models in the literature. To prove the efficacy of the HNN-RkSATEA model, we compared the proposed algorithm with HNN-RkSATES and HNN-RkSATICA in finding the global optimum, based on the global minimum ratio (Zm), mean absolute error (MAE), Bayesian information criterion (BIC) and execution time (ET).
4.1.1 Global minima ratio (Zm)
This describes the ratio of the number of global minimum energy solutions to the total number of neuron combinations in the solution space [23]. The Z_{m} formula is given in equation (29) as follows.$${Z}_{m}=\frac{1}{(NN)(COMB_{MAX})}{\displaystyle \sum _{i=1}^{n}}{H}_{{\mathbb{Q}}_{R2SAT}}^{\mathrm{min}}$$(29)where NN represents the number of neuron trials, ${H}_{{\mathbb{Q}}_{R2SAT}}^{\mathrm{min}}$ is the number of global minimum solutions achieved and COMB_MAX is the maximum number of neuron combinations generated in the solution space.
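A sketch of the Zm computation of equation (29), under the assumption (common in this line of work) that the global minimum energy is approximately zero, so a small tolerance test identifies global minimum solutions; the function name and tolerance value are illustrative.

```python
def global_minima_ratio(final_energies, nn, comb_max, tol=1e-3):
    """Zm (eq. (29)): the fraction of runs that reach the global minimum
    energy, out of nn neuron trials x comb_max neuron combinations.
    Assumes the global minimum energy is ~0, within tolerance tol."""
    hits = sum(1 for e in final_energies if abs(e) <= tol)
    return hits / (nn * comb_max)
```

A Zm close to one means almost every run relaxed to a global minimum (fully satisfied) state.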
4.1.2 Mean absolute error (MAE)
This is described as the average difference between the expected and the actual values in the solution space for each datum. It is used to ascertain the proximity of forecasts to potential outcomes in a given distribution, and it is commonly used because it can estimate the error in the data. It is identified as one of the effective metrics for identifying the accumulation of uniformly distributed error in each sample [34,37,38]. The MAE equation is presented as follows.$$MAE={\displaystyle \sum _{i=1}^{n}}\frac{1}{n}\left|{f}_{\mathrm{max}}-{f}_{i}\right|$$(30)where f _{ i } and f _{max} describe the observed fitness value and the maximum fitness, respectively.
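Equation (30) translates directly into code; the function name is illustrative.

```python
def mean_absolute_error(fitness_values, f_max):
    """MAE (eq. (30)): the average absolute gap between the maximum
    fitness f_max and each observed fitness f_i."""
    return sum(abs(f_max - f) for f in fitness_values) / len(fitness_values)
```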
4.1.3 Bayesian information criterion (BIC)
This describes a criterion for the selection of a model amongst a finite set of models in a given distribution. It is used to assess the computational efficiency of a model. The mean square error (MSE) is taken into consideration when computing the BIC values: in general, when the MSE is lower, the BIC tends to be lower. The model with the minimum BIC value is regarded as the model of interest.
Cumulatively, the MSE will be used to evaluate the BIC value during the training and recovery process. The BIC formula is described as follows:$$\mathrm{BIC}=n\mathrm{ln}(MSE)+pa\cdot \mathrm{ln}(n)$$(31)
where n, pa and MSE indicate the number of solutions obtained, the number of parameters and the mean square error used in the model, respectively. Since the HNN is free from any pa [17], the equation is rewritten as$$\mathrm{BIC}=n\mathrm{ln}(MSE)$$(32)where the MSE is measured in calculating BIC and n depicts the number of iterations during the simulation. Hence, the formula of MSE is given as follows,$$MSE=\frac{1}{n}{\displaystyle \sum _{i=1}^{n}}{(q)}^{2}$$(33)where q = f _{max} − f _{ i } is the difference between the maximum fitness and the fitness value observed during each execution.
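Equations (32) and (33) can be combined into a single helper; note that this sketch assumes a nonzero MSE, since ln(0) is undefined, and the function name is illustrative.

```python
import math

def bic(fitness_values, f_max):
    """BIC for a parameter-free model (eqs. (32)-(33)): n * ln(MSE), with
    MSE the mean squared gap to the maximum fitness (assumed nonzero)."""
    n = len(fitness_values)
    mse = sum((f_max - f) ** 2 for f in fitness_values) / n  # eq. (33)
    return n * math.log(mse)                                 # eq. (32)
```

Lower (more negative) BIC indicates smaller accumulated error, which is how the models are ranked in Section 5.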
4.1.4 Execution time (ET)
Execution time, commonly known as computational time, represents the time taken to complete an implementation cycle [17,19]. It is represented in equation (34) as follows,$$\mathrm{E}\_\mathrm{Time}(\mathrm{s})=\mathrm{Training}\_\mathrm{Time}(\mathrm{s})+\mathrm{Recovery}\_\mathrm{Time}(\mathrm{s})$$(34)
5 Results and discussions
Figures 2–5 show the search performance of the proposed model against the other existing models, exhibiting the behaviour of the errors observed in the searching process for 5 ≤ NN ≤ 100. In general, the HNN-RkSATES algorithm produces the highest number of errors, as demonstrated in Figures 3 and 4. The disparity becomes more noticeable as the problem grows larger: as the number of neurons increases, HNN-RkSATEA manages to maintain a lower error margin than the existing models, whose error margins grow with the number of neurons. This demonstrates the supremacy of the MEA in searching for the optimal solution of the RkSAT logical representation.
Figure 2 shows the apparent success of the modified election algorithm (EA) in comparison with the exhaustive search (ES) and imperialist competitive algorithm (ICA) in generating the optimally satisfied clauses. According to Figure 2, HNN-RkSATEA and HNN-RkSATES can retrieve a more accurate state than HNN-RkSATICA. Meanwhile, the ES relies on "exhaustive" trial-and-error searching during clause compliance and could only accommodate a maximum of NN ≤ 60, as the learning model crossed the CPU time threshold. This can be attributed to the nature of a thorough search, which raises the computational pressure needed to achieve the correct configuration. On the other hand, the ability of MEA to handle a high number of neurons over 5 ≤ NN ≤ 100 may be due to the potential of the campaign and coalition mechanisms, and of the revolutionary and competition operators in ICA, to reduce the computational burden of finding the optimal states. According to the trend displayed in Figure 2, in the HNN-random 2-SAT-ICA model some neuron states get trapped in suboptimal solutions at NN = 35, 45, 95, but the model still manages to achieve more than 90% success. The HNN-RkSATEA model indicates a greater efficiency of neuro-searching in optimizing the RkSAT logic representation. As the constraints keep expanding, the network becomes more difficult as NN rises in terms of the program's complexity. The connection between the global minimum ratio and the energy obtained at the end of the computation cycle has been elaborated in [16,17,20] and [34]. If the global minimum ratio of the implemented hybrid network is close to one, almost all system solutions have achieved the global minimum energy (100% satisfied clauses).
Figures 3 and 4 show the MAE and BIC evaluation of the various HNN models in optimizing random 2-SAT logic representation, recording the models' performance throughout the HNN learning phase for 5 ≤ NN ≤ 100. It is explicitly shown that HNN-RkSATEA outshines the HNN-RkSATES and HNN-RkSATICA models on the MAE and BIC assessments. HNN-R2SATES shows a steady rise in errors, due to the brute-force hunt for a satisfiability mapping, from 5 ≤ NN ≤ 60. ICA maintains a close margin with EA for 5 ≤ NN ≤ 10 and a very high margin as the number of neurons increases. In terms of BIC, ICA displays a close margin with EA, with lower BIC for 5 ≤ NN ≤ 60. The ICA displays a growing error pattern, but with lower error than the ES: the efficiency of the colonial operator decreases the iterations needed, so global solutions are found more readily than with ES. HNN-RkSATEA outclasses HNN-RkSATES and HNN-RkSATICA on the MAE and BIC measures because optimization mechanisms such as the coalition in the EA searching process are much simpler and require no additional iterations to reach a satisfying assignment; a non-improving solution is enhanced by the coalition strategy during the HNN learning phase. The HNN-R2SATES model was observed to accumulate MSE during the learning stage, so more iterations were needed to achieve global convergence. The accumulation of MSE tends to penalize the BIC values; the BIC for HNN-R2SATES is therefore the highest among the three models. In terms of the MAE and BIC assessments, EA is an acceptable approach for HNN in doing random 2-SAT logic programming compared to ES and ICA.
Figure 5 demonstrates the execution time of our proposed model, HNN-RkSATEA, in comparison with the HNN-RkSATICA and conventional HNN-RkSATES models. A glance at the running time shows that as the program becomes more complex, more effort is needed to find global solutions. All models under study display a close time range for 5 ≤ NN ≤ 50; HNN-R2SATES consumes more searching time for 50 ≤ NN ≤ 60, while HNN-RkSATES and HNN-RkSATICA show close agreement in time for 5 ≤ NN ≤ 80. According to Figure 5, our proposed HNN-R2SATEA model requires less execution time than HNN-R2SATES and HNN-R2SATICA; in the other models, more neurons are required to cross the energy barrier to relax into global solutions throughout the training phase of HNN. HNN-RkSATEA demonstrates a more stable run, producing near-zero values of MAE, BIC and ET in the training phase of HNN, lower than the other existing models. From this it can be deduced that the hybrid EA can sustain very low error values in improving the training phase of HNN for optimizing random k-SAT logical representation: as the number of neurons rises, the accumulation of errors is smaller. Furthermore, HNN-RkSATEA required fewer iterations to find the desired solutions relative to the other existing models. HNN-RkSATES, in contrast, required further iterations to find a global solution, because an interpretation collapses if the optimum fitness is not reached, which leads to the hunt for a new interpretation with a new eligibility value. In the ES searching process, a suitable assignment is achieved only after the "trial and error" process has been carried out exhaustively [16,17,20,34].
The chances of getting more clauses satisfied (optimal) improve significantly as candidates reinforce their positive images and qualities through the campaign mechanisms. The positive campaign mechanism can turn an unsatisfied clause into a satisfied clause, forcing the model to converge to the optimal solution space quickly [28,29]. Sometimes an RkSAT clause in the solution space may fail to give the best solution, revealing the defect that the algorithm tends to be trapped in local optima. To avoid premature convergence and to balance the efficiency and global search ability of the model, EA applies the negative campaign mechanism to explore other areas of the solution space, keeping the clause away from an unsatisfied solution before the entire solution space has been explored. This effort leads to a "new" search space, which helps the network avoid local optima (non-improving solutions). Because the coalition structure of the HNN-RkSATEA model can combat this uncertainty, the probability of reaching global solutions is increased. As a result, these mechanisms permit the network to search and compute feasible solutions within a reasonable execution time, which explains the higher neuro-searching capacity exhibited by the HNN-RkSATEA model compared with the other existing models.
The searching techniques work extensively throughout the training phase of RkSAT programming as the complexity of the neurons rises, so effective search techniques with optimization and robust operators are needed. The justification for this trend is that, with a higher number of neurons involved, the HNN-RkSAT problem of finding a consistent RkSAT assignment becomes harder, since the chances of finding a correct solution are significantly reduced. As it stands, HNN-RkSATES requires a substantial amount of time to reach the optimal eligibility [16,17,34]. At NN = 90, HNN-RkSATES failed to cope with the complexity of the training phase, and the HNN model was observed to be trapped in a trial-and-error state during the learning phase.
The MEA is more successful in searching for the desired outcomes, even in the case of literals of high complexity. This is attributed to the campaign and coalition processes, which reach higher eligibility within a shorter timeframe. These features of MEA allow HNN-RkSAT to complete the learning phase in a reasonable amount of time. HNN-RkSATEA is confirmed to have completed the learning process slightly quicker than the other existing models in the literature. Nevertheless, all HNN models remain competent in optimizing RkSAT and its variants and compute the global solution within a feasible CPU timeframe.
Fig. 1 Flowchart of research methodology. 
Fig. 2 Zm for HNN-R2SATEA, HNN-R2SATES & HNN-R2SATICA. 
Fig. 3 MAE for HNN-R2SATEA, HNN-R2SATES & HNN-R2SATICA. 
Fig. 4 BIC for HNN-R2SATEA, HNN-R2SATES & HNN-R2SATICA. 
Fig. 5 ET for HNN-R2SATEA, HNN-R2SATES & HNN-R2SATICA. 
6 Conclusion
From the results obtained from the simulations, it can be ascertained that our proposed HNN-R2SATEA model is a more robust heuristic than HNN-R2SATICA and HNN-2SATES for accelerating the learning phase of HNN in carrying out a random 2-SAT logic program. Our proposed model outperformed the HNN-R2SATICA and HNN-R2SATES models, as established by the reported results in terms of Zm, MAE, BIC and ET; more interestingly, a Zm of 1 is generated throughout the runs even with the complexity of the network. This also leads to the conclusion that EA is much more robust in boosting the training phase of HNN for a random 2-SAT logic program, staying closest to the global minimum irrespective of the NN released into the HNN. Finally, our future direction will focus on implementing other metaheuristic approaches (such as the dragonfly algorithm, particle swarm optimization, etc.) to accelerate the computational phase of the Hopfield neural network model in finding optimal solutions to satisfiability and other combinatorial optimization problems (such as reliability and scheduling problems).
References
 M. YahayaPudza, Z. ZainalAbidin, S. Abdul Rashid, F. MdYasin, A.S.M. Noor, M.A. Issa, Sustainable synthesis processes for carbon dots through response surface methodology and artificial neural network, Processes 7, 704 (2019) [CrossRef] [Google Scholar]
 S. Guessasma, D. Bassir, Neural network computation for the evaluation of process rendering: application to thermally sprayed coatings, Int. J. Simul. Multidiscipl. Des. Optim. 8, A10 (2017) [CrossRef] [Google Scholar]
 J. Hopfield, D. Tank, Neural' computation of decisions in optimization problems, Biol. Cybernet. 52, 141–152 (1985) [Google Scholar]
 J. Hertz, R. Krogh, G. Palmer, Introduction to the Theory of Neural Computation (AddisonWesley, Reading, MA, 1991) [Google Scholar]
 U.P. Wen, K.M. Lan, H.S. Shih, A review of Hopfield neural networks for solving mathematical programming problems, Eur. J. Oper. Res. 198, 675–687 (2009) [CrossRef] [Google Scholar]
 J.H. Park, Y.S. Kim, I.K. Eom, K.Y. Lee, Economic load dispatch for piecewise quadratic cost function using Hopfield neural network, IEEE Trans. Power Syst. 83, 1030–1038 (1993) [CrossRef] [Google Scholar]
 M.Q. Nguyen, P.M. Atkinson, H.G. Lewis, Superresolution mapping using a Hopfield neural network with fused images, IEEE Trans. Geosci. Remote Sens. 44, 736–749 (2006) [CrossRef] [Google Scholar]
 C.H. Fung, M.S. Wong, P.W. Chan, Spatiotemporal data fusion for satellite images using Hopfield neural network, Remote Sens. 11, 2077 (2019) [CrossRef] [Google Scholar]
 T.L. Duong, P.D. Nguyen, V.D. Phan, D.N. Vo, T.T. Nguyen, Optimal load dispatch in competitive electricity market by using different models of hopfield lagrange network, Energies 12, 2932 (2019) [CrossRef] [Google Scholar]
 A. Kzar, M. Jafri, K. Mutter, S. Syahreza, A modified Hopfield neural network algorithm (MHNNA) using ALOS image for water quality mapping, Int. J. Environ. Res. Public Health 13, 1–92 (2016) [Google Scholar]
 S.A. Cook, The complexity of theoremproving procedures, in Proceedings of the third annual ACM symposium on Theory of computing , 1971, 151–158 [CrossRef] [Google Scholar]
 J. MarquesSilva, Practical applications of boolean satisfiability, in 2008 9th International Workshop on Discrete Event Systems , 2008, 74–80 [CrossRef] [Google Scholar]
 C. Barrett, C. Tinelli, Satisfiability modulo theories, in Handbook of Model Checking (Springer, Cham, 2018), pp. 305–343 [CrossRef] [Google Scholar]
 Y. Shoukry, P. Nuzzo, A.L. SangiovanniVincentelli, S.A. Seshia, G.J. Pappas, P. Tabuada, SMC: satisfiability modulo convex optimization, in Proceedings of the 20th International Conference on Hybrid Systems: Computation and Control , 2017, 19–28 [Google Scholar]
 X. Sun, H. Khedr, Y. Shoukry, Formal verification of neural network controlled autonomous systems, in Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control 2019, 147–156 [Google Scholar]
 F. Capelli, Knowledge compilation languages as proof systems, in International Conference on Theory and Applications of Satisfiability Testing (Springer, Cham, 2019), pp. 90–99 [Google Scholar]
 J. Li, S. Zhu, G. Pu, L. Zhang, M. Vardi, YSATbased explicit LTL reasoning and its application to satisfiability checking, Formal Methods Syst. Des. 54, 164–190 (2019) [CrossRef] [Google Scholar]
 S. SalcedoSanz, R.R. SantiagoMozos, C. BousonoCalzon, A hybrid Hopfield networksimulated annealing approach for frequency assignment in satellite communications systems, IEEE Trans. Syst. Man Cybern. B 34, 1108–1116 (2004) [CrossRef] [Google Scholar]
 R.A. Kowalski, The Logic for Problem Solving (Elsevier Science Publishing, New York, 1979) [Google Scholar]
 G. Pinkas, Symmetric neural networks and propositional logic SAT, Neural Comput. 3, 282–291 (1991) [CrossRef] [Google Scholar]
 A.T.W. Wan Abdullah, Logic programming on a neural network, Int. J. Intell. Syst. 7, 513–519 (1992) [CrossRef] [Google Scholar]
 S. Sathasivam, Upgrading logic programming in Hopfield nets, Sains Malays. 39, 115–118 (2010) [Google Scholar]
 S. Sathasivam, Boltzmann machine and new activation function comparison, Appl. Math. Sci. 78, 3853–3860 (2011) [Google Scholar]
 N. Hamadneh, S. Sathasivam, S.L. Tilahun, O.H. Choon, Learning logic programming in radial basis function network via genetic algorithm, J. Appl. Sci. 12, 840–847 (2012) [CrossRef] [Google Scholar]
 M. Velavan, Z.R. Yahya, M.N. Abdul Halif, S. Sathasivam, Meanfield theory in doing logic programming using a Hopfield network, Mod. Appl. Sci. 10, 154–160 (2016) [CrossRef] [Google Scholar]
 S. Alzaeemi, M.A. Mansor, M.S.M. Kasihmuddin, S. Sathasivam, M. Mamat, Radial basis function neural network for 2 satisfiability programming, Indonesian J. Electr. Eng. Comput. Sci. 18, 459–469 (2020) [CrossRef] [Google Scholar]
 S.A.S. Alzaeemi, S. Sathasivam, Hopfield neural network in agent based modeling, MOJ Appl. Biol. Biomech. 2, 334–341 (2018) [Google Scholar]
 H. Emami, F. Derakhshan, Election algorithm: a new sociopolitically inspired strategy, AI Commun. 28, 591–603 (2015) [CrossRef] [Google Scholar]
 M. Kumar, A.J. Kulkarni, Socioinspired optimization metaheuristics: a review, in Sociocultural Inspired Metaheuristics (Springer, Singapore, 2019), pp. 241–265 [CrossRef] [Google Scholar]
 G. Gosti, V. Folli, M. Leonetti, G. Ruocco, Beyond the maximum storage capacity limit in Hopfield recurrent neural networks, Entropy 21, 726 (2019) [CrossRef] [Google Scholar]
 A. Barra, M. Beccaria, A. Fachechi, A new mechanical approach to handle generalized Hopfield neural networks, Neural Netw. 106, 205–222 (2018) [CrossRef] [Google Scholar]
 W. Gerstner, W.M. Kistler, Mathematical formulations of Hebbian learning, Biol. Cybern. 87, 404–415 (2002) [CrossRef] [Google Scholar]
 S. Sathasivam, M. Mansor, M.S.M. Kasihmuddin, H. Abubakar, Election algorithm for random k satisfiability in the Hopfield neural network, Processes 8, 568 (2020) [CrossRef] [Google Scholar]
 W. Fernandez de la Vega, Random 2SAT: results and problems, Theor. Comput. Sci. 265, 131–146 (2001) [CrossRef] [Google Scholar]
 D. Du, J. Gu, P.M. Pardalos, Satisfiability Problem: Theory and Applications (American Mathematical Society, 1997), p. 35 [Google Scholar]
 K. Vigneshwer, M.A. Mansor, M.S.M. Kasihmuddin, S. Sathasivam, Hybrid imperialistic competitive algorithm incorporated with Hopfield neural network for robust 3 satisfiability logic programming, IAES Int. J. Artif. Intell. 8, 144–155 (2019) [Google Scholar]
 M. Peng, N.K. Gupta, A.F. Armitage, An investigation into the improvement of local minima of the Hopfield Network, Neural Netw. 90, 207–212 (1996) [Google Scholar]
Cite this article as: Hamza Abubakar, Shamsul Rijal Muhammad Sabri, Sagir Abdu Masanawa, Surajo Yusuf, Modified election algorithm in hopfield neural network for optimal random k satisfiability representation, Int. J. Simul. Multidisci. Des. Optim. 11, 16 (2020)