For additional questions and information, please contact us at [email protected]
You can begin your purchase by selecting an option above and proceeding through the secure Stripe checkout process. However, if you need to purchase through a purchase order or need other assistance with purchasing, email us at [email protected]. We accept payment by check, electronic check, wire transfer, debit card, or credit card.
If you experience any issues with online payment, email us at [email protected] or call us at +1-425-728-8440 between 9am and 5pm PST, Monday through Friday.
To cancel, please email us at [email protected]. If you have been charged accidentally, please let us know.
If you’ve purchased a subscription online, you will be charged at the end of your license term for renewal and your license will be updated automatically.
Although we have priced our subscription packages to be the best value for our users, we know that perpetual licenses may still be required by some organizations. We offer EEMS with a perpetual license, which can be upgraded after a period of time.
The price of maintenance is 20% of the current license cost, per year of maintenance required.
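The maintenance pricing rule above can be expressed as a simple calculation. This is an illustrative sketch only: the license price used below is hypothetical, not an actual DSI price.

```python
# Maintenance pricing rule stated above: 20% of the current license cost
# per year of maintenance required. The $10,000 license cost is a
# hypothetical example, not an actual price.

MAINTENANCE_RATE = 0.20

def maintenance_cost(license_cost: float, years: int) -> float:
    """Total maintenance cost for the given number of years."""
    return MAINTENANCE_RATE * license_cost * years

print(maintenance_cost(10_000.00, 3))  # 6000.0 for a $10,000 license over 3 years
```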
We distribute EEMS electronically as a download from this site. Once we receive payment, we will send your activation key and installation instructions to your registered email address, and you can immediately begin using EEMS.
If you have not received your license key within 24 hours of purchase, please email us.
Modeling in EE
A checklist of elements to be considered is provided as part of our How-to-Guidance.
Step-by-step guidance in building models is provided here as part of our How-to-Guidance.
With EFDC+ you no longer need to pause a model run. You can simply refresh EE and view the latest results.
The EFDC Model
DSI has developed its own modified and improved version of EFDC called EFDC+ (which includes what was formerly EFDC_DSI). Apart from hundreds of vital bug fixes, EFDC+ applies dynamic memory allocation, so the user no longer needs to manually update the array sizes for each model. Moreover, a number of significant new modules have been added, including internal wind waves, Lagrangian particle tracking, oil spill, ice formation and melt, and the SEDZLJ sediment flume model. MPI and OMP multi-processing have also been integrated into EFDC+, providing a significant increase in speed. More detailed information is available on our blog post here.
DSI has developed an improved version of the EFDC+ code to deal with the pressure gradient errors that occur in simulations with steep changes in bed elevation. This new version of the code is called the Sigma Zed code and is implemented in EFDC+. It contrasts with the conventional EFDC+ approach, which applies a sigma coordinate transformation in the vertical direction and uses the same number of layers for all cells in the domain. See more information on a blog post here.
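The sigma-coordinate behavior described above can be sketched in a few lines. This is an illustrative toy example, not EFDC+ code: under a sigma transformation, every cell has the same number of layers, so each layer's thickness is a fixed fraction of the local water depth, and adjacent cells with very different depths get very different layer thicknesses — one source of pressure gradient error over steep bed slopes.

```python
# Toy illustration (not EFDC+ code): under sigma coordinates, layer
# thickness scales with local depth because every cell uses the same
# number of layers.

def sigma_layer_thicknesses(depth: float, n_layers: int) -> list[float]:
    """Uniform sigma layers: each layer is depth / n_layers thick."""
    return [depth / n_layers] * n_layers

shallow = sigma_layer_thicknesses(depth=2.0, n_layers=4)   # 0.5 m layers
deep = sigma_layer_thicknesses(depth=40.0, n_layers=4)     # 10.0 m layers
print(shallow, deep)
```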
The Testcases section of our web page has some small-scale flume examples, which we have used to duplicate a number of published papers. That said, EFDC+ uses a hydrostatic approximation for vertical flows, so it should not be used in cases where there are significant vertical accelerations. High energy gradients and high velocity gradients are not a particular problem.
Yes, EFDC+ has been used in several real-time systems that DSI has developed. Examples are described on our Solutions page here.
The primary benefit of MPI-based algorithms is that they can be run on a cluster; this is in contrast to multi-threaded programs based on something like OMP, which on their own are limited to running on a single node/computer. Increased computational performance now primarily comes from having additional cores available in a given computing environment and writing software with parallelization in mind.
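The limits of adding cores can be illustrated with Amdahl's law, a standard model of parallel speedup. The parallel fraction used below is hypothetical, not a measured EFDC+ figure.

```python
# Amdahl's law: the speedup available from n cores when a fraction p of
# the run time is parallelizable. p = 0.95 is a hypothetical value used
# for illustration only, not a measured EFDC+ number.

def amdahl_speedup(p: float, n: int) -> float:
    """Ideal speedup on n cores with parallelizable fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (4, 16, 64):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

Note how the serial 5% caps the speedup well below the core count as n grows, which is why parallelization-aware software design matters as much as raw core availability.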
OMP can still be efficient for simulations with multiple constituents (e.g. WQ) or when running an EFDC model with a small number of constituents (e.g. hydrodynamics only). MPI is much more efficient, however, for large numbers of grid cells, where the domain can be split up and handled by separate MPI nodes.
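The domain splitting mentioned above can be sketched in miniature. This is a hypothetical illustration of dividing a grid into contiguous blocks of rows, one per MPI rank; it is not the actual EFDC+ decomposition algorithm.

```python
# Hypothetical sketch (not the EFDC+ decomposition code): split a grid's
# rows into roughly equal contiguous blocks, one block per MPI rank,
# illustrating why models with many cells benefit from MPI.

def split_rows(n_rows: int, n_ranks: int) -> list[range]:
    """Assign contiguous row blocks to each rank, spreading the remainder."""
    base, extra = divmod(n_rows, n_ranks)
    blocks, start = [], 0
    for rank in range(n_ranks):
        size = base + (1 if rank < extra else 0)
        blocks.append(range(start, start + size))
        start += size
    return blocks

blocks = split_rows(n_rows=103, n_ranks=4)
print([len(b) for b in blocks])  # row counts per rank: [26, 26, 26, 25]
```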
The only requirement to run an EFDC+ model with the MPI implementation is to have multiple cores available on the computer running the model. The configuration of that computer is important for understanding run times and the maximum attainable performance. Most desktop computers have CPUs with multiple cores, which allows MPI applications to run on a single computer. However, if it is desirable to run a problem on more cores than are available on a single computer, this naturally requires several computers to be clustered. To better understand MPI, read these guiding principles for setting up an MPI run.
Yes, MPI can run on a PC with more than four cores. However, efficiency gains will only be attained if you can divide your model domain across a greater number of cores. For some models, running in OMP mode with a single domain may be more efficient. To see how to run MPI on your PC or laptop, see our guidance here.
Yes. For example, DSI recently configured a cluster on AWS using their Parallel Cluster service (Amazon Web Services, 2020a). This allows a remote (cloud-based) cluster to be configured on demand. A cluster containing four compute nodes and one head node was set up, with each compute node containing a 36-core Intel Xeon Platinum 8000-series CPU. To minimize communication latency between nodes, a high-speed interconnect was set up using the Elastic Fabric Adapter (EFA) interface. A high-speed file system based on FSx for Lustre was used to minimize time spent writing to disk during a simulation. For detailed guidance on how to set up your own system, refer here.
Yes. The recommended approach is to first build your model on a Windows machine running EEMS and then transfer the model input files to the Linux system. The Linux system will have a different EFDC+ executable than Windows but uses the same input file names and file extensions. The output generated on Linux can be copied back to the Windows machine and viewed in EFDC_Explorer. Note that performance depends to a significant extent on how the PCs are connected: if the interconnect between the machines is slow, some of the benefit of MPI will be lost.
No, you only need to install EEMS on one machine: the master machine. The other machines (the slave machines) need to have the Intel MPI Library for Windows installed and must have the working directory, model, and EEMS installation shared with them.
Recommendations for hardware are provided here.