Parameter Estimation to Improve Coastal Accuracy in a Global Tide Model

Global tide and surge models play a major role in forecasting coastal flooding due to extreme events or climate change. Model performance is strongly affected by parameters such as bathymetry and bottom friction. In this study, we propose a method that estimates the bathymetry globally and the bottom friction coefficient in shallow waters for the Global Tide and Surge Model (GTSMv4.1). However, the effect of the estimation is limited by the scarcity of available tide gauges. We therefore propose to complement the sparse tide gauges with tide time series generated from FES2014. The FES2014 dataset outperforms GTSM in most areas and is used as observations for the deep ocean and for some coastal areas, such as Hudson Bay/Labrador, where tide gauges are scarce but energy dissipation is large. The experiment is performed with a computation- and memory-efficient iterative parameter estimation scheme applied to GTSMv4.1. Estimation results show that model performance is significantly improved for the deep ocean and shallow waters, especially in the European Shelf, where the CMEMS tide gauge data are used directly in the estimation. GTSM is also validated by comparing with tide gauges from UHSLC, CMEMS, and several Arctic stations for the year 2014.


Global Tide and Surge Model
We use version 4.1 of the Global Tide and Surge Model (GTSM), a depth-averaged hydrodynamic model developed in Delft3D Flexible Mesh with an unstructured grid (Verlaan et al., 2015; Kuhlmann et al., 2011). The model is forced by the tide-generating potential with a full set of tidal frequencies. GTSM is a combined tide and surge model used to study events such as the effect of tropical cyclones and sea-level change on a global scale. Surge is induced by gradients in the atmospheric surface pressure and by the momentum transfer from the wind to the water.
The bathymetry used in GTSMv4.1 is a combination of the General Bathymetric Chart of the Oceans at 15-arc-second resolution globally (GEBCO 2019) and EMODnet2018 at 250 m resolution in the European Shelf. GTSM has 4.9 million grid cells, with a 25 km resolution in the open ocean and 2.5 km in the coastal zone (1.25 km in Europe). We also make use of a coarser-grid version of GTSM (referred to hereafter as GTSM with the fine grid and GTSM with the coarse grid). GTSM with the coarse grid has grid cells of 50 km in the deep ocean and 5 km in shallow waters, resulting in 2 million grid cells. Higher resolution results in a better representation of water levels but longer computation times; the CPU time used by GTSM with the coarse grid is one-third of that of the fine grid. The coarse-grid model is used in the estimation process to reduce the computational cost with the coarse-to-fine strategy, described in more detail in Section 2.2.

Tidal energy dissipation, with a total value of approximately 3.7 TW, is determined by bottom friction and internal tide friction. Two-thirds of it, 2.39 TW in GTSM, is generated by bottom friction. GTSM uses a formulation of the bottom friction that is quadratic in the velocity, known as the Chézy formula:

tau_b = rho g u |u| / C^2,

where rho is the water density, g the gravitational acceleration, u the depth-averaged velocity, and C the Chézy coefficient.
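The quadratic law above can be evaluated directly. The sketch below uses illustrative values only (the density, Chézy coefficient, and current speed are assumptions, not GTSM settings) to compute the bottom stress and the local dissipation rate it implies:

```python
import numpy as np

RHO = 1025.0  # sea-water density [kg/m^3] (assumed value)
G = 9.81      # gravitational acceleration [m/s^2]

def bottom_friction_stress(u, chezy=62.0):
    """Chezy bottom friction stress tau_b = rho * g * u * |u| / C^2 [N/m^2]."""
    return RHO * G * u * np.abs(u) / chezy**2

def dissipation_rate(u, chezy=62.0):
    """Local energy dissipation by bottom friction, D = tau_b * u [W/m^2]."""
    return bottom_friction_stress(u, chezy) * u

# A 1 m/s tidal current dissipates a few W/m^2; integrated over the shelf
# seas, contributions of this size add up to the ~2.4 TW quoted above.
d = dissipation_rate(1.0)
```

Because the stress is quadratic in u, the dissipation grows with the cube of the current speed, which is why the shallow, fast-flowing shelf seas dominate the global budget.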

Parameter Estimation Scheme
Global tide and surge models can be classified into three groups: empirical tide models, purely hydrodynamic models, and models with data assimilation, as shown in Table 1. Several parameter estimation algorithms have been applied to global tide models.
FES2014 uses the Spectral Ensemble Optimal Interpolation (SpEnOI) algorithm to estimate the bottom friction coefficient, the internal tide drag coefficient, the bathymetry, and the SLA, leading to an accurate atlas of 34 tidal components. HAMTIDE is a time-stepping high-resolution tide model corrected by a variational data assimilation algorithm. TPXO9 is a spectral barotropic tide model that assimilates data using a variational method. However, spectral tide models cannot describe the interaction between different tidal components in shallow waters.
In this study, we use the parameter estimation scheme developed in our previous study (Wang et al., 2021b). The basic algorithm is called DUD (Doesn't Use Derivatives) and is available in the generic data assimilation toolbox OpenDA (Ralston and Jennrich, 1978; OpenDA, 2016). DUD is a Gauss-Newton-like but derivative-free algorithm for solving non-linear least-squares problems.
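A minimal sketch of the DUD idea follows; this is our own simplified illustration, not the OpenDA implementation, and the perturbation size and stopping rule are assumptions. DUD keeps p+1 parameter vectors and builds a secant approximation of the Jacobian from their differences, so no derivatives of the model are needed:

```python
import numpy as np

def dud(f, y, x0, sigma=0.1, max_iter=50, tol=1e-12):
    """Simplified DUD: derivative-free Gauss-Newton for min ||f(x) - y||^2.
    f maps parameters to model output, y holds the observations."""
    p = len(x0)
    X = [np.asarray(x0, float)]
    for i in range(p):                       # initial one-at-a-time perturbations
        xi = np.asarray(x0, float).copy()
        xi[i] += sigma
        X.append(xi)
    R = [f(x) - y for x in X]                # residual vector per member
    cost = lambda r: float(r @ r)
    for _ in range(max_iter):
        order = np.argsort([-cost(r) for r in R])   # best member last
        X = [X[i] for i in order]
        R = [R[i] for i in order]
        xb, rb = X[-1], R[-1]
        dX = np.stack([x - xb for x in X[:-1]], axis=1)  # parameter differences
        dR = np.stack([r - rb for r in R[:-1]], axis=1)  # residual differences
        a, *_ = np.linalg.lstsq(dR, -rb, rcond=None)     # Gauss-Newton step
        x_new = xb + dX @ a
        r_new = f(x_new) - y
        if cost(r_new) >= cost(rb):          # no improvement: stop
            break
        X[0], R[0] = x_new, r_new            # replace the worst member
        if cost(rb) - cost(r_new) < tol:
            break
    return min(zip(X, R), key=lambda xr: cost(xr[1]))[0]
```

For a linear model the secant Jacobian is exact, so a single step reaches the least-squares solution; for the non-linear GTSM, the iteration repeats until the cost stops decreasing.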
The cost function between the model output and the observations is iteratively reduced by adjusting the parameters. To estimate the parameters of the high-resolution global model efficiently in terms of computational cost and memory usage, while also improving the estimation accuracy, three implementations were proposed on top of this algorithm in our previous study (Wang et al., 2021b):
- Computational cost reduction: coarse-to-fine strategy
A coarse-to-fine strategy with the Coarse Incremental Calibration approach is used in the estimation process. It approximates the increment between the output of the initial model and the model with modified parameters using a coarser grid:

H_f(x) ≈ H_f(x_b) + H_c(x) - H_c(x_b),

where H_c and H_f are the model output from the coarse- and fine-grid GTSM, x_b is the initial parameter set, and x is the adjusted parameter set in each analysis step. Thus, the fine model is only simulated with the initial parameter set and is replaced by the coarse model in the iterations, leading to a reduction of 70 % in CPU time for each model run.
- Memory requirement reduction: POD-based time pattern order reduction
Parameter estimation benefits from a long simulation time, but the dimension of the model output for all ensemble members also increases with longer time series. Model order reduction is a valuable technique for reducing a high-dimensional system to a smaller linear subspace. We project onto empirical time patterns to reduce the model output time series to a much smaller dimension. This has the advantage that the simulation length is not restricted by the Rayleigh criterion, which normally requires a year-long tide simulation. As a result, the memory requirement is reduced by an order of magnitude in the parameter estimation procedure, with negligible accuracy loss.
- Outer-loop iteration for nonlinear parameter estimation
Since a coarse-grid model is used in the estimation iterations, we developed an outer loop, similar to the Incremental 4D-Var described by Trémolet (2007). The inner loop optimizes the parameters using the coarse-grid GTSM with the DUD algorithm. The outer loop updates the initial model output from the fine-grid model with the optimized parameters and restarts the next inner loop. The outer loop improves the calibration performance for this non-linear model, for which the coarse-incremental approach is only an approximate linearization.
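The coarse-incremental replacement in the first item above can be sketched with two toy scalar models; both models and their shared parameter sensitivity are invented for illustration and are not GTSM components:

```python
# Fine and coarse stand-ins: same sensitivity to the parameter, different
# bias, mimicking two grid resolutions of the same physics.
fine = lambda x: 2.0 * x + 0.05     # expensive "fine grid" model
coarse = lambda x: 2.0 * x          # cheap "coarse grid" model

def coarse_incremental_output(H_f_xb, H_c, x, x_b):
    """H_f(x) ~ H_f(x_b) + H_c(x) - H_c(x_b): one stored fine run at x_b
    plus two cheap coarse runs replaces a fine run at every trial x."""
    return H_f_xb + H_c(x) - H_c(x_b)

x_b = 1.0                            # initial parameter (single fine run here)
H_f_xb = fine(x_b)
approx = coarse_incremental_output(H_f_xb, coarse, 1.3, x_b)
exact = fine(1.3)
```

Because the resolution-dependent bias cancels in the increment, the approximation is exact when the two grids respond identically to the parameter change; in GTSM the agreement is only approximate, which is what the outer loop corrects.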
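The POD-based time pattern reduction in the second item can be illustrated with a small synthetic ensemble; the series length, ensemble size, M2-only signal, and retained rank below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 30 * 86400.0, 4000)        # about one month of samples
omega_m2 = 2 * np.pi / (12.42 * 3600.0)         # M2 angular frequency [rad/s]

# Ensemble of model output time series (one row per perturbed model run)
Y = np.stack([(1.0 + 0.1 * rng.standard_normal()) * np.cos(omega_m2 * t)
              for _ in range(20)])              # shape (20, 4000)

# Empirical time patterns: leading right singular vectors of the snapshots
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
k = 3                                           # retained patterns
P = Vt[:k]                                      # (k, 4000) orthonormal basis
Y_reduced = Y @ P.T                             # (20, k) coefficients in memory
Y_back = Y_reduced @ P                          # reconstruction for checking

rel_err = np.linalg.norm(Y - Y_back) / np.linalg.norm(Y)
```

Each 4000-sample series is stored as k coefficients, which is where the order-of-magnitude memory reduction comes from; the reconstruction error stays negligible as long as the retained patterns capture the dominant tidal signal.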
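The interplay between the inner and outer loop in the third item can be sketched as follows; the toy fine/coarse models, the brute-force inner optimizer, and the single scalar parameter are all illustrative assumptions (GTSM uses DUD as the inner optimizer):

```python
import numpy as np

y_obs = 4.0                               # single "observation"
fine = lambda x: x * x                    # expensive nonlinear fine model
coarse = lambda x: 0.9 * x * x + 0.3      # cheaper, biased coarse model

def inner_optimize(surrogate, x_start):
    """Toy inner loop: brute-force search around the current estimate."""
    cand = x_start + np.linspace(-1.0, 1.0, 201)
    costs = [(surrogate(c) - y_obs) ** 2 for c in cand]
    return float(cand[int(np.argmin(costs))])

def estimate_with_outer_loop(x0, n_outer=4):
    """Each outer loop re-anchors the coarse-incremental surrogate on a
    fresh fine-grid run, then lets the inner loop optimize the surrogate."""
    x = x0
    for _ in range(n_outer):
        H_f_x = fine(x)                   # one expensive fine run per loop
        surrogate = lambda xp, xb=x, Hf=H_f_x: Hf + coarse(xp) - coarse(xb)
        x = inner_optimize(surrogate, x)
    return x

x_hat = estimate_with_outer_loop(1.0)     # fine(x_hat) should match y_obs
```

Re-anchoring on the fine model each outer loop removes the error that the biased coarse surrogate would otherwise leave in the optimized parameters.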
By applying these three implementations, the parameters in GTSM can be estimated in a computationally efficient and memory-efficient manner, and the estimation results in a higher accuracy of the tide forecast. In this approach, the assimilation output is fully consistent with a forward model run that uses the estimated parameters. This allows these estimated parameters to be used in other set-ups of the model, for example including surge or sea-level rise.

Parameters to Estimate
An estimation for a global tide model must consider the parameters in the deep ocean and in shallow waters together. In the deep ocean, bathymetry and internal tide friction are the two parameters affecting model performance. Seafloor bathymetry is of fundamental importance in many aspects of earth science, for example through its effect on ocean circulation and mixing. However, large parts of the global oceans remain unsurveyed; Wölfl et al. (2019) reported that only about 15 % of global bathymetry datasets are based on actual measurements. This creates significant uncertainties that affect the sea level simulation.
Internal tide friction is a term related to tidal energy dissipation in the deep ocean, generated especially in areas with steep bathymetry changes such as mid-ocean ridges. In our previous study (Wang et al., 2021b), we tested the sensitivity of the bathymetry and of the internal tide friction term for the deep ocean by comparing the relative changes of the cost function when perturbing a specific parameter. This showed that a bathymetry perturbation results in larger changes in the water level than a perturbation of the internal tide friction term. Therefore, we only optimize the global bathymetry for the deep ocean.
In shallow water, bottom friction is also a main energy-dissipation process. Figure 1a illustrates the global distribution of tidal energy dissipation by the bottom friction term. The regions in Figure 1b are defined as in Egbert and Ray (2001).
The total tidal energy dissipation in the initial GTSM is 3.77 TW: 2.39 TW from bottom friction and 1.37 TW from internal tide friction. The largest values occur in Hudson Bay, the North West Australian Shelf, and the European Shelf, as Figure 1b shows. We propose to estimate the bottom friction only in the shallow-water regions with large bottom friction energy dissipation.
It is impractical to estimate the bathymetry and the bottom friction coefficient for every grid cell because of the limited observations; it would also be too demanding in computation time and memory. To reduce the parameter dimension, we divide the global ocean into 110 subdomains for the bathymetry estimation and define a correction factor for each subdomain.
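The per-subdomain parameterisation amounts to a lookup of one multiplicative factor per region; the 3-subdomain, 7-cell layout below is a toy stand-in for the 110 subdomains mentioned above:

```python
import numpy as np

# Toy layout: a subdomain index per grid cell and its uncorrected depth [m]
subdomain_of_cell = np.array([0, 0, 1, 2, 2, 2, 1])
base_depth = np.array([4000.0, 3500.0, 200.0, 50.0, 60.0, 55.0, 150.0])

def apply_correction(base_depth, subdomain_of_cell, factors):
    """Corrected bathymetry: each cell's depth times the factor of its
    subdomain, broadcast through fancy indexing."""
    return base_depth * factors[subdomain_of_cell]

factors = np.ones(3)
factors[1] = 1.05            # +5 % depth everywhere in subdomain 1
corrected = apply_correction(base_depth, subdomain_of_cell, factors)
```

The estimation then searches the low-dimensional factor vector rather than millions of individual cell depths, which is what makes the ensemble runs affordable.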

Observation Network
Global tide data from the FES2014 dataset and several global or regional tide gauge datasets were collected as observations in the calibration process.
- FES2014 dataset
The FES2014 dataset contains 34 tidal constituents from the FES (Finite Element Solution) tide model, which assimilates altimeter time series and tide gauge data (Carrere et al., 2013; Lyard et al., 2021). FES2014 data has higher accuracy than GTSMv4.1 in the deep ocean when compared with Deep-Ocean Bottom Pressure Recorder data (Wang et al., 2021b).
Moreover, the FES2014 data are distributed on a regular 1/16° grid, and time series can be derived at arbitrary locations globally. Therefore, the dataset is selected as observations for the deep ocean to estimate the bathymetry correction.

- Tide gauge data
- UHSLC dataset: the UHSLC (University of Hawaii Sea Level Center) dataset (Caldwell, 2010) contains water levels from 500 globally distributed tide gauges. The number of available locations varies in time. Stations in the UHSLC dataset are irregularly distributed, and most of the gauges are in coastal regions. We use the research-quality controlled dataset, which is considered science-ready.

- CMEMS dataset: the CMEMS (Copernicus Marine Environment Monitoring Service) dataset is a collection of in-situ tide gauges located in the Arctic Ocean, Baltic Sea, European North-West Shelf Seas, Iberian-Biscay-Ireland regional seas, Mediterranean Sea, and Black Sea. All available data are published after data acquisition, quality control, product validation, and product distribution. The CMEMS dataset covers the European Shelf and is suitable for the local bottom friction coefficient estimation.
- Arctic tide gauge data with four major constituents: Kowalik and Proshutinsky (1994)

In this study, GTSM is simulated to calibrate tides only. Firstly, we generate about 4000 time series from the FES2014 dataset to ensure enough observations for estimating the bathymetry. These observations are evenly distributed and located in the deep ocean at depths larger than 200 m. Moreover, a tide analysis is performed on the CMEMS and UHSLC tide gauge data for the year 2014 with the TIDEGUI software, a MATLAB implementation of Schureman (1958), followed by visual inspection of the tide and surge representations. After the analysis and quality control, we obtained 237 locations from the UHSLC dataset and 297 locations from the CMEMS dataset.
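Generating tide time series at arbitrary locations from the FES2014 constituent atlas, as described above, amounts to a harmonic synthesis h(t) = sum_i A_i cos(omega_i t - phi_i). The sketch below uses two constituents with invented amplitudes and phases, and omits the nodal corrections a real synthesis would apply:

```python
import numpy as np

# Angular frequencies [rad/s] of two major constituents (periods in hours)
OMEGA = {"M2": 2 * np.pi / (12.4206 * 3600.0),
         "S2": 2 * np.pi / (12.0 * 3600.0)}

def synthesize(t, constituents):
    """h(t) = sum_i A_i * cos(omega_i * t - phi_i) from amplitude [m] and
    phase lag [deg] pairs, e.g. interpolated from a gridded atlas."""
    h = np.zeros_like(t, dtype=float)
    for name, (amp, phase_deg) in constituents.items():
        h += amp * np.cos(OMEGA[name] * t - np.deg2rad(phase_deg))
    return h

t = np.arange(0.0, 86400.0, 600.0)   # one day at 10-minute sampling
h = synthesize(t, {"M2": (1.2, 30.0), "S2": (0.4, 75.0)})
```

Sampling the synthesized series at the model's output interval makes the FES2014-derived "virtual gauges" directly comparable with the GTSM time series used in the cost function.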

Even though we obtained three collections of tide gauges, the observations are still quite sparse in some coastal seas. Therefore, we first investigate how to make use of the available data, taking the model performance and parameter sensitivity into consideration.

Model and Observation Accuracy Analysis
To our knowledge, the FES2014 dataset is very accurate in the deep ocean (Stammer et al., 2014; Wang et al., 2021b), while along the coast, tide gauge data can be more trustworthy. However, tide gauge data are distributed irregularly. We propose to use a combination of the FES2014 dataset and tide gauge data in the shallow water. The first step is to analyze the accuracy of the FES2014 dataset and of the initial GTSMv4.1 by comparing with the tide gauge data.
Tide analysis is performed with the TIDEGUI software for the water level representation from GTSM in the year 2014.
The root-mean-square (RMS) difference between model output and observations for a tidal constituent is computed as

RMS = sqrt( mean[ (A_m cos(wt - phi_m) - A_o cos(wt - phi_o))^2 ] ),

where A_m and A_o are the model and observation amplitudes, phi_m and phi_o the corresponding phase lags, and w the tidal frequency. The mean denotes averaging over one full cycle of the constituent (wt varying from 0 to 2*pi) and over all locations. We also use the Root-Sum-Square (RSS), the square root of the sum of the squared RMS values of the listed major tidal constituents. To facilitate comparison, we use the same formulas for RSS and RMS as in Stammer et al. (2014). Table 2 lists the RSS and the RMS of eight major tidal components for FES2014 and the initial GTSM with respect to the tide gauge data. The RSS is calculated over all eight components at all locations. Compared with the UHSLC dataset globally, FES2014 is more accurate than GTSM for all eight components, implying that the FES2014 dataset generally provides a better tide representation in shallow water than GTSM. This conclusion is also supported by the comparison with the stations in the Arctic Ocean. Figure 2 shows the spatial distribution of the RSS for each location, which shows that, with a few exceptions, FES2014 is more accurate. In the European Shelf, GTSM has an RSS of 19.15 cm when compared with the CMEMS dataset, which is even smaller than that of the FES2014 dataset, with an RSS of 20.42 cm. This can also be observed from the RMS of the N2, M2, S2, and K2 constituents.
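For a single constituent, the time average over one cycle can be evaluated in closed form, which gives a compact implementation of both metrics (our reading of the Stammer et al. (2014) formulas, with phases in radians; not code from that paper):

```python
import numpy as np

def rms_constituent(A_m, phi_m, A_o, phi_o):
    """RMS difference for one constituent, averaged over a full cycle and
    over all stations.  The cycle average of
    (A_m*cos(wt - phi_m) - A_o*cos(wt - phi_o))^2 reduces to
    0.5*(A_m^2 + A_o^2 - 2*A_m*A_o*cos(phi_m - phi_o))."""
    A_m, phi_m = np.asarray(A_m, float), np.asarray(phi_m, float)
    A_o, phi_o = np.asarray(A_o, float), np.asarray(phi_o, float)
    var = 0.5 * (A_m**2 + A_o**2 - 2.0 * A_m * A_o * np.cos(phi_m - phi_o))
    return float(np.sqrt(np.mean(var)))

def rss(rms_values):
    """Root-Sum-Square over the listed constituents."""
    return float(np.sqrt(np.sum(np.square(rms_values))))

# Perfect agreement gives zero; a pure 180-degree phase error of a unit
# amplitude gives sqrt(2).
r0 = rms_constituent([1.0], [0.0], [1.0], [0.0])
```

The closed form avoids synthesizing and differencing full time series, so the metric can be evaluated cheaply for every constituent and station set.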
However, the spatial distribution of the RSS for each station, shown in Figures 2c and 2d, indicates that FES2014 outperforms GTSM at most of the CMEMS locations but provides poor results at a few stations. These few stations produce a larger RSS for FES2014 than for GTSM. A possible reason is that the tidal components for FES2014 are calculated by interpolating the gridded FES2014 dataset to the observation locations, which introduces some errors. GTSM has a higher resolution in the European Shelf, contributing to better results at those locations with complex bathymetry.
In general, FES2014 outperforms GTSMv4.1 in the shallow waters before calibration. Therefore, we will select FES2014 for calibration in those areas where tide gauge stations are sparse. In the following, we use the FES2014 dataset in the deep ocean and CMEMS data in the shallow waters for the calibration. In addition, FES2014 is also included to support the shallow waters without tide gauges. UHSLC and Arctic stations are used for model validation.

Subdomains of Constant Bottom Friction Coefficient
The bottom friction coefficients in the regions with large tidal energy dissipation (see Figure 1b) have to be estimated. The subdomains for which a constant bottom friction coefficient is estimated are shown as the red boxes in Figure 3.
The available observations in this region are the Arctic stations, which only include four major tidal components. In theory, a harmonic tide analysis can be performed on the model output, and it is possible to estimate parameters with the model output in the form of tidal components, but an accurate tide analysis needs a time series of a year, which would increase the computation time needed for the estimation by more than a factor of ten. In Wang et al. (2021b), we showed that an accurate estimation can be performed with a full time series of one month, so using year-long series would increase run times by a factor of 12. This is not feasible for us at the moment.
Therefore, we choose to use the model output as time series, and these Arctic stations can then be utilized for model validation.
To obtain sufficient observations, we propose to generate more observations from the FES2014 dataset, because the FES2014 dataset outperforms GTSM. Figure 2e illustrates the RSS (Root-Sum-Square) of four major tidal constituents between the tide gauge data in the Arctic Ocean and the FES2014 dataset. The RSS difference between GTSM and the FES2014 dataset (RSS between GTSM and tide gauge data minus RSS between FES2014 and tide gauge data) varies per location, and FES2014 has a smaller RSS than GTSM at most of the locations, especially in the Canadian Archipelago region (Figure 2f). The RSS of the four major tidal constituents over all locations is 21.81 cm for the FES2014 dataset, while it is 27.82 cm for GTSM. Errors are typically larger near the coast. The performance of FES2014 at the Arctic stations is better than that of GTSM before the calibration.
We expect the accuracy of FES2014 in open water to be even better. Therefore, we propose to use the FES2014 dataset as the observations in the Hudson Bay region. As a result, 61 equally distributed time series are generated at the locations shown in Figure 4b, and 95 points are used for validation (green points in Figure 4b).

Experiment Design
The experiment is set up to investigate the performance of GTSM after the estimation of bathymetry and bottom friction.
GTSM is simulated with tide only because no surge data are available in the deep ocean. In addition, the surge is not sensitive to the bathymetry (Wang et al., 2021b) and would have to be adjusted separately. The improvement of the tide representation in this study can also benefit the accuracy of the total water level. For the estimation runs, we selected a period of one month, September 2014, which we believe is sufficient for tide calibration when using high-frequency time series with 10-minute sampling (Wang et al., 2021b). To make this possible, meteorological and long-period signals have to be reduced as much as possible. We made model runs without atmospheric forcing and removed the SA and SSA tidal potential. These constituents were also removed from the FES2014 and tide gauge time series to keep the comparison consistent. We defined several constraints in the optimization process to ensure that the adjusted parameters are realistic. The uncertainty for the bathymetry correction factor is set to 5 % and for the bottom friction coefficient to 20 %. Initially, each parameter is perturbed one by one with the uncertainty value to obtain the model output for each ensemble member. The same values are also used for a weak constraint added to the cost function as the background term, which penalises the difference between the initial and adjusted parameters. There is a transition zone between each subdomain to avoid a sudden change in the correction factor from one subdomain to another.
The correction factor in the transition zone is generated by automatic linear interpolation. Figure 6a illustrates the cost function changes for each iteration in the four outer loops. The first 130 iterations in each loop perturb the parameters one by one; after that, the parameters are iteratively updated until the stop criteria are reached. The optimized parameters of one outer loop are used as the initial parameters to start the next loop. The estimation experiment was performed with 200 cores on 9 cluster nodes, running for about 16 days, with a total cost of approximately 76800 CPU core hours.
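The weak-constraint formulation described above corresponds to a cost function with an observation term and a background term. The sketch below uses hypothetical dimensions and a toy observation operator (not the GTSM/OpenDA configuration) to make the two uncertainty settings explicit:

```python
import numpy as np

def cost(x, x_b, y, H, sigma_obs, sigma_b):
    """J(x) = ||(y - H(x))/sigma_obs||^2 + ||(x - x_b)/sigma_b||^2.
    The background term keeps the adjusted parameters close to x_b."""
    r = (y - H(x)) / sigma_obs
    b = (x - x_b) / sigma_b
    return float(r @ r + b @ b)

x_b = np.ones(4)                               # initial correction factors
sigma_b = np.array([0.05, 0.05, 0.20, 0.20])   # 5 % bathymetry, 20 % friction
H = lambda x: np.array([x[0] + x[1], x[2] * x[3]])  # toy observation operator
y = H(x_b)                                     # synthetic observations

j0 = cost(x_b, x_b, y, H, 1.0, sigma_b)        # zero at the background
j1 = cost(x_b + 0.05, x_b, y, H, 1.0, sigma_b) # grows away from it
```

Because the background term is scaled by the per-parameter uncertainty, a 5 % move of a bathymetry factor is penalised as strongly as a 20 % move of a friction coefficient, which is what keeps the adjusted parameters realistic.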
The cost function started from a value of 1.96 × 10^7. It is sharply reduced in the first outer loop to a value of 6.40 × 10^6, a reduction of 67.3 %. The decrease of the cost function in the second to fourth outer loops is slight, and it converges in the fourth loop at a value of 5.58 × 10^6. Finally, the cost function is reduced to 28.5 % of its original value. The relative changes of the bathymetry and the bottom friction coefficient are shown in Figures 6b and 6c. After the estimation, the total tidal energy dissipation remains reasonable, with a value of 3.77 TW.
The spatially averaged RMSE between model output and observations in September 2014 is summarised in Table 3. Compared with the FES2014 dataset, the spatially averaged RMSE is sharply reduced to 47.6 % of its initial value after the estimation, from 5.19 cm to 2.47 cm.
The total reduction is significant in the first outer loop and slight in the second to fourth outer loops. It is observed that in the Arctic Ocean, the initial RMSE, with a value of 11.03 cm, is larger than in other regions. This is expected because we added more observation points in Hudson Bay/Labrador; this area is shallower, with large tide amplitudes, resulting in a larger RMSE than in other regions. Note that the comparison here includes the observations located in the deep ocean and in shallow water together.
The outer-loop iterations provide more improvement in the Arctic Ocean than in other regions. A possible explanation is that parameter estimation has the largest impact in areas with large disagreement with the observations, because these still have room to improve and non-linear effects become more likely. In Europe, GTSM shows significant improvement compared with the CMEMS tide gauge data for calibration and validation, reduced to 63.9 % and 69.4 %, respectively. The difference between the model and the UHSLC data is significantly reduced in the first outer loop and finally decreases to 75.7 %. This decline is smaller than that for the CMEMS data for two reasons: first, we do not include the UHSLC data in the estimation process; secondly, many shallow waters where the UHSLC tide gauges are located are not included in the bottom friction coefficient estimation. For example, only two tide gauges are available in the Arctic Ocean, and no stations are in the Hudson Bay area.
The spatial distribution of the RMSE for the estimated GTSM and of the RMSE difference between the initial and estimated model is shown in Figure 7. Compared with the CMEMS dataset in Figures 7g and 7h, the parameter estimation brings a large improvement to the European Shelf, with the RMSE reduced from 17.60 cm to 11.25 cm. This demonstrates that the direct use of tide gauge data in the estimation can improve model performance in shallow waters. Figures 7c and 7d also illustrate that the RMSE between the model and the UHSLC dataset decreases by a small amount. Figures 7e and 7f report the comparison with the UHSLC dataset for the Australian shelf, where we defined several subdomains for the bottom friction estimation. Even though the subdomains here are not as detailed as in Hudson Bay and the European Shelf, the RMSE is also greatly reduced after the calibration at most of the tide gauges.
In general, GTSM shows significant improvement after the parameter estimation in September 2014. The joint estimation of bathymetry and bottom friction gives larger improvements than the estimation of bathymetry only (Wang et al., 2021b).
GTSM benefits from estimating the bottom friction coefficient, especially in Hudson Bay/Labrador and the European Shelf. The combined use of FES2014 and tide gauge data compensates for the scarcity of observations in shallow water and improves the model skill after the parameter estimation. The direct use of tide gauge data provides excellent agreement between the observations and the model output after the estimation.

Model Validation in the Year of 2014
In this section, we validate GTSM against the FES2014 dataset and the tide gauge data for the whole year 2014, both in the time and frequency domains (Table 7).

Model performance in shallow water is compared with the CMEMS and UHSLC tide gauge data in Figure 9. The CMEMS data in Figure 9a include all the stations for calibration and validation. The spatially averaged RMSE for the year 2014 in the initial model is 16.7 cm. After the first outer-loop estimation, a large reduction to a value of 12.38 cm is achieved. The accuracy is further improved by the outer-loop iterations. Finally, the RMSE is reduced to 66.5 % of its initial value. The direct use of CMEMS tide gauge data for the calibration of the bottom friction coefficient effectively reduces the model error that stems from parameter uncertainty and results in a high-accuracy tide representation in the shallow waters.
In this study, the UHSLC tide gauge dataset is only used for validation (Figure 9b). Most improvements are achieved in the first outer loop, with small changes in the later outer-loop iterations. This shows that the calibration also yields better agreement in shallow waters outside Europe; however, because many of the stations are not in the estimation subdomains we defined, the improvement is limited.

To summarize, the estimated GTSM shows excellent agreement with observations in the deep ocean and in shallow waters when compared with data in the time domain. The estimation results do not over-fit the simulation period. The direct use of tide gauge data for the estimation plays a substantial role in shallow waters (here, the European Shelf). Using the FES2014 dataset to replace tide gauges in coastal areas also improves the model accuracy. Model performance can be analyzed further by comparison with the tide gauges in the frequency domain.

Comparison of Tidal Constituents
To further analyze the model performance of GTSM before and after the estimation, we perform a harmonic analysis for the year 2014, focusing first on the M2 tidal constituent. The RMS of the tidal constituents M2, S2, K1, and O1 in the Arctic Ocean is much larger than in other regions, both before and after the estimation. This can also be observed from the spatial distribution of the amplitude and phase (cf. Table 3 in Stammer et al. (2014)). After the estimation, the RSS of GTSM is reduced to 2.83 cm. Even though it is still not as accurate as FES2014 or other assimilative tide models (Table 3 in Stammer et al. (2014)), it is excellent compared with the non-assimilative models. In addition, GTSM, like the non-assimilative models, can be used in scenario studies, for example of climate change. To analyze the GTSM performance in shallow waters, we summarize the RMS of the major tidal components in comparison with the tide gauge data in Table 6. The tidal components of the FES2014 dataset have been evaluated at the tide gauge locations in Table 2. After the estimation, the RSS of GTSM is reduced by 16 % relative to the initial GTSM, from 17.03 cm to 14.36 cm. However, the error is still larger than that of the FES2014 dataset, with a value of 12.98 cm in Table 2. This is expected because we use the FES2014 dataset as the observation for some coastal regions, and its observation error limits the estimation accuracy to some extent.
Compared with the CMEMS dataset (all locations in the calibration and validation subsets), the RSS of all eight components is reduced from 19.15 cm to 12.74 cm. Moreover, after the estimation, the model errors show a larger reduction in the European Shelf, compared with CMEMS, than in the other regions compared with the UHSLC dataset and the Arctic stations. These results also demonstrate that directly assimilating tide gauge data can significantly improve the accuracy of the tide representation in models.
In the Arctic Ocean, we analyze the four major tidal components from the Arctic stations and GTSM. When comparing with the FES2014 dataset in the Arctic Ocean (Figure 8a), the model error is significantly decreased in every outer-loop iteration. To assess the model performance in each iteration, we report the results of the comparison with the Arctic stations for the four outer loops in Table 6. The RMS is reduced after the first outer loop, especially for the M2 component, resulting in a value of 22.24 cm, close to the accuracy of FES2014 shown in Table 2. However, the total accuracy in the second to fourth outer loops is not further improved: the M2 constituent becomes slightly worse, while the other tidal frequencies improve. This contrasts with what we observed in Table 3 and Figure 8a for the comparison with the FES2014 data in the Arctic Ocean. In Table 3, the RMSE of the 196 time series in the Arctic Ocean derived from the FES2014 dataset is reduced step by step over the outer-loop iterations.
The model output steadily approaches the FES2014 dataset in this process, but there are no significant improvements with respect to the Arctic stations from the outer-loop iterations. This is because, firstly, most of the Arctic stations are located in the Canadian Archipelago, not Hudson Bay. In addition, FES2014 still contains observation errors, even though it provides higher accuracy than the initial GTSM. The estimation brings the results closer to FES2014, but this does not mean they are consistently closer to the Arctic stations, because of the observation error in FES2014 and the uncertainties of the Arctic stations themselves. The spatial distribution of the RSS for each station is illustrated in Figure 11. We can observe that the error of GTSM after the estimation is smaller than before (Figure 11a-c). However, the estimated GTSM does not surpass the accuracy of the FES2014 dataset (Figure 11d), which we did not expect it to. Therefore, we conclude that the observation error significantly influences the estimation accuracy. In addition, the stations in Norway seem to get worse (Figure 11c), which is inconsistent with the CMEMS data.

In summary, the model assessments in the time and frequency domains demonstrate that the parameter estimation of the bathymetry and the bottom friction coefficient, combined with the FES2014 and tide gauge data as observations, can significantly improve the tide representation in the deep ocean and in shallow waters.

Conclusions
This study presents the joint estimation of the bathymetry and the bottom friction coefficient for the Global Tide and Surge Model (GTSMv4.1). Bathymetry is an important parameter affecting model performance at the global scale (Wang et al., 2021b), and the bottom friction term influences the tide representation in areas with significant tidal energy dissipation (shallow/coastal areas). The FES2014 dataset, with higher accuracy than the initial GTSM in the deep ocean, is used for calibration in this paper. It plays a vital role in correcting the bathymetry factor in the ocean subdomains we defined. To make the estimation of the bottom friction coefficient feasible, we propose a combination of FES2014 and tide gauge data for the estimation of the bottom friction in shallower coastal waters.
Applying this parameter estimation significantly improves the tide representation of GTSM almost everywhere around the globe.
The Hudson Bay/Labrador Sea and the European Shelf are the regions with the largest tidal energy dissipation. The bottom friction coefficient in the European Shelf is optimized with the tide gauge data from the CMEMS dataset. This results in the largest improvements in tide accuracy for shallow waters. We refined the observation locations from the FES2014 dataset in the Hudson Bay and Labrador Sea. This approach is based on the fact that the data of the Arctic stations only contain four major tidal components and therefore cannot be used for the calibration, and that FES2014 has higher accuracy than the initial GTSM when compared against these stations. After the estimation, the accuracy of GTSM is close to that of FES2014 here. Moreover, some other coastal areas with large energy dissipation are estimated by including more observations, located at depths between 50 and 200 m, from the FES2014 dataset, because the UHSLC tide gauges are too few in number to be used for the calibration directly in many regions. After the calibration, GTSM shows smaller disagreements than the initial model but is not as accurate as the FES2014 dataset when comparing