# Validated predictive computational methods for surface charge in heterogeneous functional materials: HeteroFoaM™

Kenneth L Reifsnider^{1} (email author), Dan G Cacuci^{1}, Jeffrey Baker^{1}, Jon Michael Adkins^{1} and Fazle Rabbi^{1}

*Mechanics of Advanced Materials and Modern Processes* **1**:3

https://doi.org/10.1186/s40759-014-0001-y

© Reifsnider et al.; licensee Springer. 2015

**Received: **28 August 2014

**Accepted: **25 November 2014

**Published: **5 May 2015

## Abstract

### Background

Essentially all heterogeneous materials are dielectric, i.e., they are imperfect conductors that generally display internal charge displacements that create dissipation and local charge accumulation at interfaces. Over the last few years, the authors have focused on the development of an understanding of such behaviour in heterogeneous functional materials for energy conversion and storage, called HeteroFoaM (www.HeteroFoaM.com). Using paradigm problems, this work will indicate major directions for developing generally applicable methods for the multiphysics, multi-scale design of heterogeneous functional materials.

### Methods

The present paper outlines the foundation for developing validated predictive computational methods that can be used in the design of multi-phase heterogeneous functional materials, or HeteroFoaM, as a genre of materials. Such methods will be capable of designing not only the constituent materials and their interactions, but also the morphology of the shape, size, surfaces and interfaces that define the heterogeneity and the resulting functional response of the material system.

### Results

Relationships to applications which drive this development are identified. A paradigm problem based on dielectric response is formulated and discussed in context.

### Conclusions

We report an approach that defines a methodology for designing not only the constituent material properties and their interactions in a heterogeneous dielectric material system, but also the morphology of the shape, size, surface, and interfaces that defines the heterogeneity and the resulting functional response of that system.

### Keywords

Heterogeneous materials; Dielectric; Surface charge; Predictive modelling; Computational methods

## Background

### Heterogeneous materials

Essentially all heterogeneous materials are dielectric, i.e., they are imperfect conductors that generally display internal charge displacements that create dissipation and local charge accumulation at interfaces. Over the last few years, the authors have focused on the development of an understanding of such behaviour in heterogeneous functional materials for energy conversion and storage, called HeteroFoaM (www.HeteroFoaM.com). Using paradigm problems, this work will indicate major directions for developing generally applicable methods for the multiphysics, multi-scale design of heterogeneous functional materials.

In a solid oxide fuel cell (SOFC) electrode, for example, oxygen ions conducted through the ionic phase react with the fuel to form H₂O, the product gas, which is transported away. But the electrochemical reaction just described requires a “triple point boundary” where the ionic and electronic conductors are exposed to the hydrogen-containing fuel gas to enable the chemical reaction that drives the energy conversion process. Triple point boundaries in Figure 1 are defined by the interfaces of the light and dark particles that are exposed to the void phase.

Although similar discussions could be advanced for the other microstructures shown in Figure 1, the SOFC example exemplifies the problem addressed in the present paper. The computational methods that we envisage developing for the design of HeteroFoaM systems must provide a foundation for the design of all of the local details of microstructure as well as support and represent the controlling physics that drives the functionality of the materials system. Several computational methods for these general problems have already been developed and will be discussed (Liu 2012; Liu 2011a; Liu & Reifsnider 2013; Reifsnider et al. 2013). An extensive experimental program has been conducted to validate our understanding at the fundamental level (Baker et al. 2014). However, the charge that accumulates at the material boundaries in such heterogeneous material mixtures is not yet fully described and understood, and provides the focus of the present paper.

## Methods

The local charge density, *ρ*, is given by Gauss’s law, *ρ* = ∇ · **D**, where **D** is related to the applied field by the permittivity of the k^{th} phase, ε_k, according to the relationship **D** = ε_o ε_k **E** (where ε_o is the permittivity of vacuum), and the charge flux caused by conduction, **J**, is related to the applied field, **E**, by **J** = σ**E**, where σ is the conductivity. Neglecting source terms, we model the conservation of charge in the form

\( \nabla \cdot \mathbf{J} + \frac{\partial \rho}{\partial t} = 0. \)   (1)

Writing the electric field in terms of the electric potential, *V*, as **E** = −∇*V*, it follows that

\( -\nabla \cdot \left( \sigma \nabla V \right) - \frac{\partial}{\partial t} \nabla \cdot \left( \epsilon_o \epsilon_k \nabla V \right) = 0. \)   (2)

If the applied field is harmonic, i.e., proportional to e^{−jωt}, then Eq. (2) takes on a form that is typically solved by codes such as COMSOL, namely

\( -\nabla \cdot \left[ \left( \sigma + j\omega \epsilon_o \epsilon \right) \nabla V - \left( \mathbf{J}^c + j\omega \mathbf{P} \right) \right] = Q_j, \)   (3)

where *σ* and *ε* are the conductivity and permittivity, respectively, of a given phase, *ω* is the frequency of oscillation of the electric field, ∇ is the gradient operator, \( j\equiv \sqrt{-1} \), **P** is the polarization, while **J**^c and *Q*_j denote source terms. In a series of previous publications, we have applied Eq. (3) to some of the complex microstructures illustrated in Figure 1 at the conformal level, and have included electrochemical effects in the resulting flux terms shown in that equation. However, in the present publication we are only concerned with constructing a predictive method for time-dependent electric charge distribution (including surface charge) in heterogeneous materials at the fundamental level, and with the validation of such a method (Liu 2012; Liu 2011a; Liu & Reifsnider 2013; Reifsnider et al. 2013; Baker et al. 2014; Liu 2011b; Raihan et al. 2014). Therefore, without loss of generality for the present discussion, we neglect all source terms and concern ourselves with the following simplified version of Eq. (1):

\( \nabla \cdot \left[ \left( \sigma + j\omega \epsilon_o \epsilon \right) \nabla V \right] = 0. \)   (4)
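To make the constitutive relations above concrete, the following sketch (our own construction, with illustrative property values, not taken from the paper) evaluates the complex admittivity σ + jωε₀ε of each phase and the resulting field partition in a two-layer slab governed by the one-dimensional, source-free form of Eq. (4); the function names and numerical values are assumptions for illustration only.

```python
# Illustrative sketch (not from the paper): the 1-D form of Eq. (4),
# d/dx[(sigma + j*omega*eps0*eps) dV/dx] = 0, for a two-layer slab.
# For piecewise-constant properties, the complex current density
# J = (sigma + j*omega*eps0*eps) * E is the same in both layers.
import numpy as np

EPS0 = 8.854e-12  # permittivity of vacuum, F/m

def admittivity(sigma, eps_r, omega):
    """Complex admittivity sigma + j*omega*eps0*eps_r of one phase."""
    return sigma + 1j * omega * EPS0 * eps_r

def two_layer_fields(sigma1, eps1, d1, sigma2, eps2, d2, V0, omega):
    """Complex fields (E1, E2) in each layer of a series two-layer slab
    with total applied potential V0 across thickness d1 + d2."""
    a1 = admittivity(sigma1, eps1, omega)
    a2 = admittivity(sigma2, eps2, omega)
    # Current continuity: a1*E1 = a2*E2; potential drop: E1*d1 + E2*d2 = V0
    E1 = V0 / (d1 + d2 * a1 / a2)
    E2 = E1 * a1 / a2
    return E1, E2

omega = 2 * np.pi * 1e3  # 1 kHz excitation (assumed)
E1, E2 = two_layer_fields(1e-6, 4.0, 1e-3, 1e-8, 10.0, 1e-3, 1.0, omega)
J1 = admittivity(1e-6, 4.0, omega) * E1   # current density in layer 1
J2 = admittivity(1e-8, 10.0, omega) * E2  # current density in layer 2
```

By construction the two complex current densities agree, which is the one-dimensional statement of the divergence-free condition in Eq. (4).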

In these relations, *d* denotes the thickness of our domain perpendicular to the diagram. There are many forms of the material constants in Eq. (2), but generally they may have both real (in phase) and imaginary (out of phase) components. If we consider a transcendental (harmonic) form for the potential, *φ*, and take only the real part of the material property coefficient in Eqs. (2) and (4), one obtains the form of Eq. (5). Taking the thickness to be unity (*d* = 1) and the modulus of the transcendental term to be of order one, we simply discuss the design equation, Eq. (6).

## Results and discussion: forward problem

These forms help us to define the local physics that is to be specified in our design of the material system, and help us to properly set a design analysis. Although we are considering only a balance of charge in the present case, the analysis can be extended to other conservation equations (e.g., for mass, momentum, and energy) to support a multiphysics design with a more general discussion. However, the present discussion will adequately define the methodology for more general applications.

At the most general level, examination of the physical domain in Figure 2 together with Eqs. (1)–(6) indicates that, for an applied vector electric field, the slope of the potential in that direction multiplied by a material constant that controls the charge displacement in that material is constant, as we know and expect from electrostatics. So in Figure 2(a), for a uniform material, the sensible voltage, *V*, is a straight line across the material. If the material is an ideal conductor, the slope of that line is set simply by the conductivity of the material, σ, for the static case. However, if the material has some permittivity, the slope of *V* may be greater or smaller at various frequencies of the input field; at very high input frequencies, the system acts like a parallel plate capacitor, as it must, and the slope of the potential (*V*) across the plates approaches zero.

At an interface between two material regions, the potential is continuous, *V*^i = *V*^o, and the normal components of the displacement satisfy

\( \epsilon^{o} \frac{d V^{o}}{dn} - \epsilon^{i} \frac{d V^{i}}{dn} = -\frac{\sigma_f}{\epsilon_o}, \)   (7)

where *σ*_f denotes the free charge density on an interface, *d*/*dn* denotes the derivative along the normal to the interface surface, while the superscripts “*o*” and “*i*” denote the “outside” and “inside” material regions, respectively. Since we are not concerned with free charge in this paper, we can set *σ*_f = 0 in Eq. (7). Using Gauss’s law to calculate the total interface charge density (free plus bound) across a heterogeneous material interface yields

\( \sigma_t = \epsilon_o \left( \frac{d V^{i}}{dn} - \frac{d V^{o}}{dn} \right). \)   (8)

As we can see from Figure 2, and as is well known, introducing a dielectric (or conductor) into the region between the “parallel plates” of Figure 2(a) creates a larger change in slope at the boundary as well as a surface charge at the interior interface, resulting in an increase in the charge storage, and hence the capacitance, of the domain (Liu 2011b). But the extrapolation of this simple reasoning to the design of more general geometries and, eventually, more complex microstructures is not a straightforward extension of these simple rationales.
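The Gauss’s-law interface-charge argument above can be checked numerically for the simplest case: two ideal dielectric layers in series between parallel plates. The sketch below (our own, with assumed values and a sign convention in which the normal points from layer 1 into layer 2) shows that identical layers carry no interface charge while dissimilar layers do.

```python
# Hypothetical numeric check of the Gauss's-law interface-charge reasoning:
# two ideal dielectric layers in series between parallel plates.
EPS0 = 8.854e-12  # permittivity of vacuum, F/m

def series_fields(eps1, d1, eps2, d2, V0):
    """Fields (E1, E2) from continuity of normal D (eps1*E1 = eps2*E2)
    and the total potential drop E1*d1 + E2*d2 = V0."""
    E1 = V0 / (d1 + d2 * eps1 / eps2)
    E2 = E1 * eps1 / eps2
    return E1, E2

def interface_charge(eps1, d1, eps2, d2, V0):
    """Total (bound) surface charge density at the internal interface,
    eps0*(E2 - E1), with the normal oriented from layer 1 to layer 2."""
    E1, E2 = series_fields(eps1, d1, eps2, d2, V0)
    return EPS0 * (E2 - E1)

# Identical layers: zero interface charge. Dissimilar layers: nonzero charge.
q_same = interface_charge(4.0, 1e-3, 4.0, 1e-3, 1.0)
q_diff = interface_charge(2.0, 1e-3, 8.0, 1e-3, 1.0)
```

The nonzero value of `q_diff` is the interior surface charge responsible for the increased charge storage discussed above.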

If we were to use this analysis approach for design, we could, in principle, “run the forward problem” many times, for many candidate morphologies, and invoke a variety of optimization methods to determine “what the picture should look like”. Although such an approach might be feasible for simple single-physics problems, for a multi-physics analysis it would be akin to the many-body problem (which often involves additional nonlinear response associated with coupling, etc.), and would quickly become computationally intractable and unpredictable.

## Results and discussion: design for response

The “direct” or “forward” problem solves the “parameter-to-output” mapping that describes the “cause-to-effect” relationship in the respective physical process. Taking Eq. (3), for example, the forward (or direct) problem consists in solving it subject to appropriate boundary conditions to obtain the solution *V*(**x**), which is in turn used to compute the model responses **r**[*V*(**x**); **α**(**x**)], where **α**(**x**) ≡ [*d*(**x**), *σ*(**x**), *ω*, *ε*(**x**), **J**^c(**x**), **P**(**x**), *Q*_j(**x**)] denotes the vector of spatially- (**x**-)dependent model parameters. The *necessary and sufficient conditions for the direct problem to be well-posed* were formulated by Hadamard (1865–1963), and can be stated as follows: (i) for each source, *Q*_j, there exists a solution *V*(**x**); (ii) the solution *V*(**x**) is unique; (iii) the dependence of *V*(**x**) upon “the data” **α**(**x**) and the boundary conditions is continuous. A problem that is not well-posed is called *ill-posed*. In general, two problems are called *inverses* of one another if the formulation of each involves all or part of the solution of the other. Several inverse problems can be formulated, as follows: (a) the classical “inverse source identification problem”: given the responses **r**, the known boundary conditions, and the parameters **α**(**x**), determine the sources *Q*_j; (b) the “parameter identification problem”: given the responses **r** and the sources *Q*_j, determine the parameters **α**(**x**); (c) when the domain contains inhomogeneous materials and the responses **r** are given, identify the internal boundaries between the inhomogeneous materials, the description of the system’s structure (“structural identification”), etc.

The existence of a solution for an inverse problem is, in most cases, secured by defining the data space to be the set of solutions to the direct problem. This approach may fail if the data is incomplete, perturbed, or noisy. Problems involving differential operators, for example, are notoriously ill-posed, because the differentiation operator is not continuous with respect to any physically meaningful observation topology. If the uniqueness of a solution cannot be secured from the given data, additional data and/or a priori knowledge about the solution must be used to further restrict the set of admissible solutions. Of the three Hadamard requirements, stability is the most difficult to ensure and verify. If an inverse problem fails to be stable, then small round-off errors or noise in the data will amplify to a degree that renders a computed solution useless.
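The instability of differentiation noted above is easy to exhibit numerically. In the small demonstration below (our own, not the paper’s), a perturbation of amplitude 10⁻⁶ whose frequency is tuned to the finite-difference step barely changes the data, yet changes the computed derivative by an amount of order one.

```python
# Numeric illustration (ours) of why differentiation is ill-posed:
# a tiny high-frequency data perturbation produces an O(1) change
# in the finite-difference derivative.
import math

H = 1e-6                    # finite-difference step
OMEGA = math.pi / (2 * H)   # perturbation frequency tuned to the step

def max_abs_diff_quotient(f, n=100, h=H):
    """Max |centered difference quotient| of f over a grid on [0, 1]."""
    xs = [i / n for i in range(n + 1)]
    return max(abs((f(x + h) - f(x - h)) / (2 * h)) for x in xs)

f_exact = lambda x: math.sin(x)
f_noisy = lambda x: math.sin(x) + 1e-6 * math.sin(OMEGA * x)  # tiny wiggle

d_exact = max_abs_diff_quotient(f_exact)  # close to max|cos| = 1
d_noisy = max_abs_diff_quotient(f_noisy)  # inflated by the perturbation
```

The data differ by at most 10⁻⁶ everywhere, but `d_noisy` is roughly twice `d_exact`: the derivative of the perturbation has amplitude ε·ω, which does not go to zero with ε when ω grows correspondingly.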

To illustrate these concepts, consider a one-dimensional slab extending from *x* = 0 to *x* = *d* (considered to be infinite in the *y*- and *z*-directions), and having perfectly known constant material properties, in which the potential, *V*(*x*), is driven by a spatially varying source, *Q*(*x*) ≡ *dQ*_j. For such an idealized material, Eq. (3) takes on the simple form

\( -\sigma \frac{d^2 V(x)}{d x^2} = Q(x). \)   (9)

The inverse source identification problem is to determine *Q*(*x*) from measurements of the potential, *V*(*x*). A measurement would be recorded as a “detector” or “instrument response” that can be represented in the form

\( M = \int_0^d R_d(x)\, V(x)\, dx, \)   (10)

where *R*_d(*x*) represents the detector’s response function. For a known (measured) response value *M*, it is evident that Eq. (10) represents a Fredholm equation of the first kind for the determination of the spatially-dependent voltage, *V*(*x*), which cannot be solved as it stands to produce a unique solution *V*(*x*)! Moreover, Fredholm equations of the first kind are notoriously ill-posed, since the integration over the kernel [in this case, *R*_d(*x*)] of the Fredholm equation has a “smoothing” effect on the high-frequency components, cusps, and edges that may be present in *V*(*x*). This effect stems from the well-known Riemann–Lebesgue lemma, which for the purposes of this work can be written in the form

\( \underset{n \to \infty}{\lim} \int_0^d f(x) \sin (n x)\, dx = 0. \)   (11)

It can therefore be expected that the (inverse) determination of *V*(*x*) using the Fredholm Eq. (10) will amplify high frequency components (such as those stemming from measurement errors) in the measured “detector response” *M*.
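The Riemann–Lebesgue decay just invoked is simple to verify numerically. The sketch below (our own, with an arbitrary smooth sample kernel f(x) = 1 + x²) shows that the oscillatory integral shrinks as the frequency grows.

```python
# Quick numeric check (ours) of the Riemann-Lebesgue behavior: the integral
# of a fixed smooth function against sin(n*x) decays as n grows.
import math

def osc_integral(n, m=20000):
    """Midpoint-rule approximation of integral_0^1 f(x)*sin(n*x) dx
    for the sample kernel f(x) = 1 + x**2."""
    h = 1.0 / m
    return sum((1 + ((k + 0.5) * h) ** 2) * math.sin(n * (k + 0.5) * h)
               for k in range(m)) * h

I_small = abs(osc_integral(3))    # low-frequency weight: O(1)
I_large = abs(osc_integral(500))  # high-frequency weight: O(1/n)
```

This is precisely the mechanism by which the smoothing kernel suppresses, rather than preserves, the high-frequency content of *V*(*x*).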

The spectral reconstruction of *V*(*x*) proceeds as follows. There are two main classes of methods for discretizing integral equations, namely quadrature methods and Galerkin methods (which include collocation, spectral, and pseudo-spectral methods). Consider, for simplicity, an idealized point detector located at *x*_n, with a response function of the form

\( R_d(x) = C_d\, \delta \left( x - x_n \right), \)   (12)

where *C*_d is an appropriate “measurement conversion function”, so that the detector provides, at any spatial location *x*_n, a measurement of the form

\( M_n = C_d\, V(x_n). \)   (13)

Physically, the potential *V*(*x*) must be square integrable, piecewise continuous, and of bounded variation (having at most a finite set of discontinuities of finite magnitudes within the slab). Therefore, the function *V*(*x*) must admit a spectral (e.g., Fourier) representation, and the choice of basis functions can be conveniently adapted to the boundary conditions and to possible periodicities and/or symmetries inherent in the problem under consideration. For our illustrative inverse problem, we expect to be able to measure *V*(*x*) at least at the left and right boundaries of the slab, i.e., at *x* = 0 and at *x* = *d*, respectively, obtaining two values, which will be conveniently denoted as *M*_0 ≡ *C*_d *V*(0) and *M*_N ≡ *C*_d *V*(*x*_N), *x*_N ≡ *d*, respectively.

*From the point of view of the forward problem, these measurements would mathematically provide (two) Dirichlet boundary conditions for V*(*x*), namely

\( V(0) = M_0 / C_d, \qquad V(d) = M_N / C_d, \)   (14)

*to complement the differential equation (9), thus rendering the forward problem* [namely, to determine the function *V*(*x*) when the source *Q*(*x*) is known] *perfectly well-posed in the sense of Hadamard*. Actually, the *unique and exact solution, V*^{exact}(*x*)*, of the forward problem consisting of Eqs. (9) and (14) is*

\( V^{exact}(x) = V(0) + \left[ V(d) - V(0) \right] \frac{x}{d} + \sum_{n=1}^{\infty} {c}_n^{exact} \sin \left( \frac{n\pi x}{d} \right), \)   (15)

where the exact expansion coefficients are obtained from the source by

\( {c}_n^{exact} = \frac{2}{\sigma d} \left( \frac{d}{n\pi} \right)^2 \int_0^d Q(x) \sin \left( \frac{n\pi x}{d} \right) dx. \)   (16)

Turning now to the *inverse problem* at hand, it becomes clear that the spectral representation shown in Eq. (15) underscores the fact that *the determination of V*^{exact}(*x*) *would require infinitely many measurements, M*_n*, in order to determine all of the coefficients* \( {c}_n^{exact} \). But it is practically impossible to perform infinitely many measurements. In practice, therefore, the determination of the first *J* coefficients (*c*_1, …, *c*_J) necessitates *J* measurements of *V*(*x*) at locations (*x*_1, …, *x*_J), in order to construct the following system of equations (for determining the coefficients *c*_1, …, *c*_J):

\( M_m = C_d \left\{ V(0) + \left[ V(d) - V(0) \right] \frac{x_m}{d} + \sum_{n=1}^{J} c_n \sin \left( \frac{n\pi x_m}{d} \right) \right\}, \qquad m = 1, \dots, J. \)   (17)
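The collocation system just described is straightforward to set up and solve numerically when the measurements are noiseless. The sketch below (our own construction, with homogeneous boundary values and invented test coefficients) recovers the sine-series coefficients exactly from point measurements.

```python
# Sketch (ours): recover J sine-series coefficients from J point
# "measurements" M_m = C_d * V(x_m) on a slab with V(0) = V(d) = 0,
# by solving the linear collocation system S c = M.
import numpy as np

d, C_d, J = 1.0, 2.0, 4
c_true = np.array([1.0, -0.5, 0.25, 0.1])  # invented test coefficients

def V(x):
    """Potential built from the known coefficients (boundary terms zero)."""
    return sum(c * np.sin((n + 1) * np.pi * x / d)
               for n, c in enumerate(c_true))

x_m = np.array([(m + 1) * d / (J + 1) for m in range(J)])  # interior points
M = C_d * V(x_m)                                           # "measurements"
# Collocation matrix: S[m, n] = C_d * sin((n+1)*pi*x_m/d)
S = C_d * np.sin(np.outer(x_m, np.arange(1, J + 1)) * np.pi / d)
c_rec = np.linalg.solve(S, M)
```

With perfect data the recovery is exact to round-off; the instability discussed next enters only once the measurements (and hence the coefficients) carry errors.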

Solving this system yields the coefficients (*c*_1, …, *c*_J) as the solution of the equation **c** = **S**^{−1}**M**, where **S** denotes the *J* × *J* matrix with elements \( S_{mn} = C_d \sin \left( n\pi x_m / d \right) \) and **M** denotes the vector of measurements [adjusted for the boundary terms in Eq. (17)]. In practice, the *coefficients c*_n *cannot possibly be determined perfectly*, for at least the following three reasons: (i) it is impossible to perform infinitely many measurements; (ii) the measurements *M*_j cannot be performed perfectly, so they will be afflicted by measurement errors; and (iii) inverting the matrix **S** will introduce additional numerical errors. Therefore, the reconstructed coefficients *c*_n will be affected by errors, which can be considered to be additive, of the form

\( {c}_n^{rec} = {c}_n^{exact} + {\varepsilon}_n, \)   (18)

where \( {c}_n^{rec} \) denotes the reconstructed *n*^{th} coefficient, while *ε*_n denotes the corresponding error. Hence, the reconstructed potential, denoted here as *V*^{rec}(*x*), will have the form

\( V^{rec}(x) = V(0) + \left[ V(d) - V(0) \right] \frac{x}{d} + \sum_{n=1}^{J} \left( {c}_n^{exact} + {\varepsilon}_n \right) \sin \left( \frac{n\pi x}{d} \right). \)   (19)

The above representation of the potential clearly indicates that its reconstruction from measurements introduces errors over the entire spatial-frequency spectrum. It is especially important to note that the highest-frequency spatial errors cannot be controlled from the “measurement side” since they arise precisely because of the truncation to finitely many terms, which actually stems from the inability to perform infinitely many measurements.

The consequences of attempting to reconstruct the source *Q*(*x*) from Eq. (9) can now be displayed explicitly, as follows. If the exact expression, *V*^{exact}(*x*), given in Eq. (15) were available, if the model represented by Eq. (9) were perfect, and if the boundary values given in Eq. (14) were perfectly well known, then the exact expression for the source, *Q*^{exact}(*x*), could be obtained by replacing Eq. (15) into Eq. (9). The expression thus obtained for *Q*^{exact}(*x*) would be

\( Q^{exact}(x) = \sigma \sum_{n=1}^{\infty} {c}_n^{exact} \left( \frac{n\pi}{d} \right)^2 \sin \left( \frac{n\pi x}{d} \right). \)   (20)

Note that the exact coefficients \( {c}_n^{exact} \) must decrease, with increasing *n*, faster than *n*^{−2} in order to ensure the convergence of the infinite series on the right side of Eq. (20). This property can be readily verified by using Eq. (16) to compute the exact coefficients, \( {c}_n^{exact} \), that would result from various particular forms of the source *Q*^{exact}(*x*).

In practice, however, the exact potential, *V*^{exact}(*x*), is unavailable! Only the reconstructed potential, *V*^{rec}(*x*), given in Eq. (19), is available. Replacing this expression into Eq. (9) yields the reconstructed source

\( Q^{rec}(x, J) = \sigma \sum_{n=1}^{J} \left( {c}_n^{exact} + {\varepsilon}_n \right) \left( \frac{n\pi}{d} \right)^2 \sin \left( \frac{n\pi x}{d} \right), \)   (21)

so that the error in the reconstructed source, defined as

\( Q^{error}(x, J) \equiv Q^{rec}(x, J) - Q^{exact}(x), \)   (22)

takes on the form

\( Q^{error}(x, J) = \sigma \sum_{n=1}^{J} {\varepsilon}_n \left( \frac{n\pi}{d} \right)^2 \sin \left( \frac{n\pi x}{d} \right) - \sigma \sum_{n=J+1}^{\infty} {c}_n^{exact} \left( \frac{n\pi}{d} \right)^2 \sin \left( \frac{n\pi x}{d} \right). \)   (23)

Although the errors *ε*_j are unknown, they are nevertheless some numerical constants, and the crucial fact is that they do *not* depend on *n*. It therefore follows that, in the limit of large *J*, the second sum in Eq. (23) will vanish, but the first sum will diverge to infinity, so that

\( \underset{J \to \infty}{\lim} \left| Q^{error}(x, J) \right| \to \infty. \)   (24)

The above behavior of *Q*^{error}(*x*, *J*) clearly highlights the destructive effect of high-frequency errors when attempting to determine the source from potential measurements by using the forward Eq. (9): high-frequency error components arising from the reconstruction of the potential from measurements cause a large deviation between the true source and the source that would be reconstructed from those measurements. Furthermore, this discrepancy between the true and the reconstructed source becomes larger the higher the frequency of the error component in the reconstructed potential. The fundamental reason for this behavior is that the non-compact Laplace operator “amplifies” the high-frequency error components when the forward equation is used to reconstruct the source, *Q*(*x*), from measurements of the potential, *V*(*x*).

Methods for solving *approximately* an ill-posed problem such as that described above are called *regularization procedures* (methods), after the systematic works by Phillips (Phillips 1962) and Tikhonov, who obtained “optimal solutions” by solving a minimization problem for a “cost functional”, *F*(*x*), containing user-defined parameters and meant to minimize a user-defined “error”; usually, this minimization problem takes on the form

\( \underset{x}{\min}\, F(x), \qquad F(x) \equiv {\left\| Ax - d \right\|}^2 + \beta\, {\left\| x - x_0 \right\|}^2, \)   (25)

where *β* is a “free parameter” meant to accomplish a “user-defined compromise” between two requirements: (i) to satisfy the model equation *Ax* − *d* = 0, and (ii) to be close to the a priori knowledge *x*_0. A rich literature (too numerous to cite here) of variations on the Tikhonov–Phillips regularization procedure has since emerged; their common characteristic is the fundamental dependence of the “regularized solution” on “user-tunable” parameters, like *β* in Eq. (25).
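The effect of the minimization in Eq. (25) can be sketched in a few lines. In the demonstration below (our own assumed setup: a Gaussian smoothing operator standing in for an ill-conditioned model, a smooth “true” state, and a small amount of noise), the closed-form minimizer of Eq. (25), which solves (AᵀA + βI)x = Aᵀd + βx₀, remains accurate while the naive unregularized inverse is destroyed by the noise.

```python
# Minimal sketch (assumed setup, not the paper's) of Tikhonov-Phillips
# regularization, Eq. (25): F(x) = ||A x - d||^2 + beta*||x - x0||^2.
# The minimizer satisfies (A^T A + beta*I) x = A^T d + beta*x0.
import numpy as np

rng = np.random.default_rng(0)
n = 20
# An ill-conditioned "smoothing" operator A (discrete Gaussian kernel)
A = np.array([[np.exp(-((i - j) / 2.0) ** 2) for j in range(n)]
              for i in range(n)])
x_true = np.sin(np.linspace(0, np.pi, n))            # smooth true state
data = A @ x_true + 1e-3 * rng.standard_normal(n)    # noisy "measurements"

def tikhonov(A, d, beta, x0):
    """Closed-form minimizer of the cost functional in Eq. (25)."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + beta * np.eye(m), A.T @ d + beta * x0)

x_naive = np.linalg.solve(A, data)                   # unregularized inverse
x_reg = tikhonov(A, data, beta=1e-3, x0=np.zeros(n)) # regularized solution

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The price of the stability is exactly the dependence criticized in the text: the quality of `x_reg` hinges on the user-tuned value of β and on the a priori guess x₀.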

In an attempt to eliminate the appearance of “user-tunable parameters”, Cacuci and co-workers (Barhen et al. 1980) combined concepts from information theory and Bayes’ theorem to calibrate (“adjust”) simultaneously system (model) responses and parameters, in order to obtain best-estimate values for both the responses and the system parameters, with reduced uncertainties, with applications to reactor physics and design. Several years later, these methods (Barhen et al. 1980; Barhen et al. 1982) were re-discovered by workers in other fields, e.g., earth and atmospheric sciences (Barhen et al. 1982), mechanics of materials (Bui 1994), environmental sciences (Faragó et al. 2014), etc. In the design problem considered here, we are given a functional result and asked to design the system that produces it. For large-scale complex systems, it is practically impossible to run all possible cases in the “forward” direction (even with multivariate optimization algorithms) or to solve the inverse problem. Adjoint methods, which stem from Lagrange’s method of “integration by parts” (~1755), and were set on a rigorous mathematical foundation by Hilbert and Banach, were used (already in the 1940s) for solving efficiently *linear problems* in nuclear and reactor physics, and (a decade later) in optimal control, by avoiding the need to solve repeatedly forward or inverse problems with altered model parameter values. However, these early adjoint methods were applicable solely to linear problems, since nonlinear operators do *not* admit “adjoints”. Cacuci and co-workers (Cacuci et al. 1980a; Cacuci et al. 1980b) initiated the application of adjoint methods for computing sensitivities of simple responses in simple nonlinear problems. In a remarkable breakthrough, Cacuci (Cacuci 1981a; Cacuci 1981b; Cacuci 1988) developed in 1981 a *mathematically-rigorous “adjoint sensitivity analysis” theory applicable to completely general nonlinear systems*.
Since the late 1980s, adjoint methods have enjoyed a remarkably fast and wide-spread field of applications, from the interpretation of seismic reflection data (Yedlin & Pawlink 2011), to airfoil design (e.g., the Boeing 747 wing; Kress et al. 1991), to numerical error control (Kress et al. 1991).

The framework proposed in the present work goes beyond these methods *by developing a unified framework based on physics-driven mathematical procedures founded on the maximum entropy principle, dispensing with the need for “minimizing user-defined cost functionals”* (which characterizes virtually all of the methods currently in use). This fairly self-explanatory framework is depicted in Figure 5, and aims at developing validated predictive computational methods that can be used in the design of multi-phase HeteroFoaM materials. Such methods will be capable of designing not only the constituent materials and their interactions, but also the morphology of the shape, size, surfaces and interfaces that define the heterogeneity and the resulting functional response of the material system. Last but not least, this framework is envisaged to provide the foundation for developing game-changing high-order (to at least fourth-order, including skewness- and kurtosis-like moments of the predicted distributions for design parameters and responses of interest) predictive direct and inverse modelling capabilities, empowered by a new *high-order adjoint sensitivity analysis procedure* (*HO-ASAP*) for computing exactly and extremely efficiently (“smoking fast”) response sensitivities of arbitrary order to any and all parameters in large-scale coupled multi-physics models. The high efficiency of the second-order adjoint sensitivity analysis procedure (*SO-ASAP*) has been illustrated (Cacuci 2014b) via an application to a paradigm particle diffusion problem; a series of papers documenting the *HO-ASAP* are currently in preparation.

## Conclusions

The present paper has outlined a foundation for developing validated predictive computational methods that can be used in the design of a genre of multi-phase heterogeneous functional materials that we call HeteroFoaM. We have defined and discussed analysis methods that will be capable of designing not only the constituent materials and their interactions, but also the morphology of the shape, size, surfaces and interfaces that define the heterogeneity and the resulting functional response of the functional material system. We have also discussed applications which drive this development. The problem of designing a heterogeneous functional material for specified dielectric performance, specifically addressing the role of space charge at heterogeneous interfaces, was presented as an example of the multi-scale functional behaviour that drives this approach and the methodology discussed.

## Declarations

### Acknowledgements

The authors gratefully acknowledge the support of the broadband dielectric spectroscopy and related work by the Energy Frontier Research Center for Heterogeneous Functional Materials, the HeteroFoaM Center, under DoE Grant No. DE-SC0001061 from the Office of Basic Energy Sciences. Instrumentation and laboratory facilities used in the execution of the reported work are maintained by the South Carolina SmartState Center for Solid Oxide Fuel Cells and the Department of Mechanical Engineering at the University of South Carolina.


## References

- Baker J, Adkins JM, Rabbi F, Liu Q, Reifsnider K, Raihan R (2014) Meso-design of heterogeneous dielectric material systems: structure property relationships. Journal of Advanced Dielectrics 4:1450008
- Barhen J, Cacuci DG, Wagschal JJ, Mullins CB (1980) A systematic methodology for the reduction of uncertainties in transient thermal-hydraulics by using in-bundle measurement data. ANS Topical Conference “1980 Advances in Reactor Physics and Shielding”, Sun Valley, Idaho, September 14–17, 1980, ANS/70048, pp 156–168
- Barhen J, Cacuci DG, Wagschal JJ, Bjerke MA, Mullins CB (1982) Uncertainty analysis of time-dependent nonlinear systems: theory and application to transient thermal-hydraulics. Nucl Sci Eng 81:23–44
- Bui HD (1994) *Inverse problems in the mechanics of materials: an introduction*. CRC Press, Boca Raton, USA
- Cacuci DG (1981a) Sensitivity theory for nonlinear systems: I. Nonlinear functional analysis approach. J Math Phys 22:2794–2802
- Cacuci DG (1981b) Sensitivity theory for nonlinear systems: II. Extensions to additional classes of responses. J Math Phys 22:2803–2812
- Cacuci DG (1988) The forward and the adjoint methods of sensitivity analysis. Chapter 3 in *Uncertainty Analysis*, Ronen Y (ed). CRC Press, Boca Raton, Florida, pp 71–144
- Cacuci DG (2003) *Sensitivity and Uncertainty Analysis: Theory*, Vol. 1. Chapman & Hall/CRC, Boca Raton
- Cacuci DG (2014a) Predictive modeling of coupled multi-physics systems: I. Theory. Annals of Nuclear Energy 70:266–278
- Cacuci DG (2014b) Efficient and exact computation of second-order response sensitivities using adjoint systems: a paradigm illustrative neutron diffusion problem. Trans Am Nucl Soc, Track #11528, Anaheim, CA, November 9–13
- Cacuci DG, Weber CF, Oblow EM, Marable JH (1980a) Sensitivity theory for general systems of nonlinear equations. Nucl Sci Eng 75:88–110
- Cacuci DG, Greenspan E, Marable JH, Williams ML (1980b) Developments in sensitivity theory. ANS Topical Conference “1980 Advances in Reactor Physics and Shielding”, Sun Valley, Idaho, September 14–17, 1980, NAS/70048, pp 692–704. See also: Cacuci DG, Weber CF (1980) Application of sensitivity theory for extrema of functions to a transient reactor thermal-hydraulics problem. Trans Am Nucl Soc 34:312
- Cacuci DG, Ionescu-Bujor M, Navon MI (2005) *Sensitivity and Uncertainty Analysis: Applications to Large Scale Systems*, Vol. 2. Chapman & Hall/CRC, Boca Raton, Florida, USA
- Cacuci DG, Navon MI, Ionescu-Bujor M (2014) *Computational Methods for Data Evaluation and Assimilation*. Chapman & Hall/CRC, Boca Raton, Florida
- Cacuci DG, Ionescu-Bujor M (2010) Sensitivity and uncertainty analysis, data assimilation and predictive best-estimate model calibration. Chapter 17 in *Handbook of Nuclear Engineering*, Vol. 3, Cacuci DG (ed). Springer, New York, Berlin/Heidelberg, pp 1913–2051
- Faragó I, Havasi A, Zlatev Z (eds) (2014) *Advanced numerical methods for complex environmental models: needs and availability*. Bentham Science Publishers
- Kress R, Lassi P, Arinta A (1991) On the far field in obstacle scattering. SIAM J Appl Math
- Liu Q (2011a) Physalis method for heterogeneous mixtures of dielectrics and conductors: accurately simulating one million particles using a PC. J Comp Phys 230:8256–8274
- Liu Q (2011b) Physalis method for heterogeneous mixtures of dielectrics and conductors: accurately simulating one million particles using a PC. J Comp Phys 230:8256–8274
- Liu Q (2012) Directly resolving particles in an electric field: local charge, force, torque, and applications. Int J Numer Meth Engng 90:537–568
- Liu Q, Reifsnider K (2013) Heterogeneous mixtures of elliptical particles: directly resolving local and global properties and responses. J Comp Phys 235:161–181
- Phillips DL (1962) A technique for the numerical solution of certain integral equations of the first kind. J Assoc Comp Mach 9:84–97
- Raihan R, Adkins J-M, Baker J, Rabbi F, Reifsnider K (2014) Relationship of dielectric property change to composite material state degradation. Composites Science and Technology. http://dx.doi.org/10.1016/j.compscitech.2014.09
- Reifsnider K, Chiu WK, Brinkman K, Du Y, Nakaja A, Rabbi F, Liu Q (2013) Multiphysics design and development of heterogeneous functional materials for renewable energy devices: the HeteroFoaM story. J Electrochemical Society 160:F470–F481
- Yedlin M, Pawlink P (2011) Scattering from cylinders using the two-dimensional vector plane wave spectrum. J Opt Soc Am A 28(6):1177–1184, doi:10.1364/josaa.28.001177

## Copyright

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.