Products that are exported by rich countries are ranked more highly than commodities exported by poorer countries. Let H_i be a proxy for country i's stock of human capital, say the average level of education of its workforce, in years. These are national factor endowments.
This is a weighted average of the capital abundance of the countries exporting k, where the weights Z are RCA indices adjusted to sum up to one. For instance, if good k is exported essentially by Germany and Japan, it is revealed to be capital intensive. If it is exported essentially by Viet Nam and Lesotho, it is revealed to be labor-intensive.
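The book's applications use Stata; the weighting logic can nonetheless be sketched in a few lines of Python. This is a minimal illustration with made-up data (the country names, export values and capital stocks are all hypothetical): the revealed capital intensity of a good is the average of its exporters' capital abundance, weighted by RCA indices normalized to sum to one.

```python
def rca(exports, country, good):
    """Balassa revealed comparative advantage: (x_ck / X_c) / (x_wk / X_w)."""
    x_ck = exports.get((country, good), 0.0)
    X_c = sum(v for (c, g), v in exports.items() if c == country)
    x_wk = sum(v for (c, g), v in exports.items() if g == good)
    X_w = sum(exports.values())
    return (x_ck / X_c) / (x_wk / X_w)

def revealed_capital_intensity(exports, good, capital_per_worker):
    """Weighted average of exporters' capital abundance, with RCA
    weights normalized to sum to one."""
    weights = {c: rca(exports, c, good) for c in capital_per_worker}
    total = sum(weights.values())
    return sum(w / total * capital_per_worker[c] for c, w in weights.items())

# Hypothetical two-country, two-good world: Germany capital-abundant,
# Viet Nam labor-abundant.
exports = {("DEU", "cars"): 90.0, ("DEU", "shirts"): 10.0,
           ("VNM", "cars"): 10.0, ("VNM", "shirts"): 90.0}
capital = {"DEU": 100.0, "VNM": 10.0}
k_cars = revealed_capital_intensity(exports, "cars", capital)      # 91.0
k_shirts = revealed_capital_intensity(exports, "shirts", capital)  # 19.0
```

Cars, exported mainly by the capital-abundant country, come out as revealed capital intensive; shirts come out as labor intensive.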
Analyzing regional trade

Preferential trade agreements (PTAs) are very much in fashion. The surge in PTAs has continued unabated since the early s. By that same date, agreements were in force. However, the theory so far suggests that these agreements do not necessarily improve the welfare of member countries. A first step is to visualize intra-regional trade flows, showing raw figures and illustrating them in a visually telling way.
Table 1. The figure highlights the overwhelming weight of Brazil and Argentina in regional trade.

Regional intensity of trade

Regional intensity of trade (RIT) indices measure, on the basis of existing trade flows, the extent to which countries trade with each other more intensely than with other countries, thus providing information on the potential welfare effects of a regional integration agreement.
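One common form of this index can be sketched as follows (in Python rather than the Stata used in the book's applications; several variants of the index exist, so the exact functional form below is an illustrative assumption): it divides the share of a region's trade that is intra-regional by the region's share in world trade.

```python
def regional_intensity_of_trade(intra_regional, region_total, world_total):
    """RIT = (intra-regional share of the region's trade)
           / (region's share of world trade).
    A value above one suggests members trade with each other more
    intensely than the region's weight in world trade would predict."""
    regional_share = intra_regional / region_total
    world_share = region_total / world_total
    return regional_share / world_share

# Illustrative numbers: a bloc trades 100 in total, 20 of it internally,
# in a world trading 1000.
rit = regional_intensity_of_trade(20.0, 100.0, 1000.0)  # 2.0
```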
Chapter 3 will illustrate how econometric analysis can shed additional light on the welfare effects of PTAs using the gravity equation. This is indeed what the data shows in Figure 1. With perfect correlation between sectoral shares, the index is one hundred; with perfect negative correlation, it is zero.
Note that these exports and imports are by commodity but relative to the world, not to each other.

Other important concepts

a. The evolution of the REER is often a good predictor of looming balance-of-payments crises. Suppose that price indices are normalized in both countries in the base year. Inflation is 4 per cent abroad but 15 per cent at home, an inflation differential of around 11 percentage points.
The exchange rate is 3. That is, the home economy loses price competitiveness because of the 8. The REER is simply a trade-weighted average of bilateral real exchange rates. However, like price-index weights, the trade weights are unlikely to vary much over time and can be considered quasi-constant over longer time horizons than exchange rates.
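These two objects can be sketched in a few lines of Python (the book's own computations are in Stata; the numbers below are illustrative, loosely following the 4 versus 15 per cent inflation example in the text):

```python
def real_exchange_rate(nominal_rate, foreign_prices, home_prices):
    """Bilateral real exchange rate, RER = E * P* / P.
    With this convention a fall in the RER is a real appreciation:
    home goods become relatively expensive and the home economy
    loses price competitiveness."""
    return nominal_rate * foreign_prices / home_prices

def reer(bilateral_rers, trade_weights):
    """REER as a trade-weighted average of bilateral real exchange
    rates (weights assumed to sum to one)."""
    return sum(bilateral_rers[c] * trade_weights[c] for c in bilateral_rers)

# Base year: both price indices at 100, nominal rate normalized to 1.
rer_base = real_exchange_rate(1.0, 100.0, 100.0)   # 1.0
# One year on: 4% inflation abroad, 15% at home, nominal rate unchanged.
rer_next = real_exchange_rate(1.0, 104.0, 115.0)   # about 0.904
```

With an unchanged nominal rate, the 11-point inflation differential translates into a real appreciation of roughly 10 per cent.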
The two most common indicators are the barter terms of trade and the income terms of trade. However, these data are very difficult to collect, in particular for low-income countries. Most estimates are thus based on a combination of market price quotations for a limited number of leading commodities and unit value series for all other products for which prices are not available (usually at the SITC three-digit commodity breakdown), with the well-known caveat of not controlling for quality changes.
A particular case is the price of oil, which may distort the picture if not corrected to take into account the terms of agreements governing the exploitation of petroleum resources in the country. Another caveat is the bias in the weights that may arise from shocks in the base year, which is normally corrected by replacing base year values by three-year averages around the base year.
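The two indicators can be written compactly. As a sketch (Python for illustration; the index values below are hypothetical), the net barter terms of trade are the ratio of the export to the import price index, and the income terms of trade scale that ratio by the export volume index, giving the purchasing power of exports.

```python
def net_barter_tot(px, pm):
    """Net barter terms of trade: 100 * Px / Pm, the ratio of the
    export price index to the import price index."""
    return 100.0 * px / pm

def income_tot(px, pm, qx):
    """Income terms of trade: the purchasing power of exports, i.e.
    the net barter TOT scaled by the export volume index Qx."""
    return net_barter_tot(px, pm) * qx / 100.0

# Export prices up 20%, import prices flat, export volumes up 10%:
nbt = net_barter_tot(120.0, 100.0)       # 120.0
itt = income_tot(120.0, 100.0, 110.0)    # 132.0
```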
Once constructed, these country-specific TOT indices can be aggregated at the regional level, usually using a Paasche-type formula. The ITT index measures the purchasing power of exports.

Data

1. Databases

a. Disaggregated trade and production data

i. Several trade nomenclatures and classification systems exist, some based on essentially administrative needs and others designed to have economic meaning.
Tariff schedules and systems of rules of origin are also expressed in the HS. In the figure, each HS section is represented as a point with its share in the total number of subheadings (HS 6) on the horizontal axis and its share in world exports on the vertical one. If subheadings were of roughly equal size, points would lie on or near the diagonal.

Production classification systems

Going from the HS to the SITC nomenclatures is easy enough and entails limited information loss using concordance tables.
Among production nomenclatures, the most widely used until recently was the Standard Industrial Classification (SIC), which classifies goods in categories labelled A to Q at the highest degree of aggregation and in 4-digit codes at the lowest. Its main drawback is a high degree of aggregation of service activities, reflecting a focus on manufacturing, but this may not be a major concern to trade analysts.
The CPC Version 2. NACE Rev. NACE is harmonized across member states to four digits. Concordance tables between these nomenclatures can be found in various places. This has the unfortunate implication that simple indices like import-penetration ratios, which require both trade and production data, can be calculated only at fairly aggregate levels. Details can be found in United Nations. James Rauch designed a reclassification of SITC four-digit categories by degree of product differentiation.
The first category is made of products traded on organized exchanges, such as the London Metal Exchange; the second is made of products with reference prices listed in widely available publications like the Knight-Ridder CRB Commodity Yearbook; the third is made of differentiated products whose prices are determined by branding.

Databases

The first and foremost database for trade by commodity is UN Comtrade.
All trade values are in thousands of current US dollars, converted from national currencies at nominal exchange rates. UN Comtrade also reports volumes in physical units, so that unit values can, at least in principle, be calculated for each good (more on this below). The price to pay for the analytical processing of raw trade data is that BACI trails UN Comtrade with a two-year lag (the latest version covers around countries from to). The availability of data varies, but the database, which updates the earlier release, potentially covers developing and developed countries over —. It includes a variety of data useful for the estimation, inter alia, of gravity equations.
Perhaps one of its most useful features is the presence of input—output tables, which make it possible to trace vertical linkages.

Measurement issues

Trade is measured very imperfectly, but some measures are better than others, and it is important to use the right ones if one is to minimize measurement errors. Export data, which are typically not part of the tax base, or only marginally so, are monitored less carefully by customs administrations than import data.
However, in countries with high tariffs and weak customs monitoring capabilities, the value of imports is sometimes deliberately underestimated by traders to avoid tariffs or the product is declared under a product heading with a lower tariff. Import data are also subject to further reporting errors. The data are typically compiled by national statistical offices and reviewed by trade ministries on the basis of raw data provided by customs administrations, but this filtering does not eliminate all aberrations.
Many LDCs have benefited in recent years from technical assistance programmes designed to raise awareness among customs administrations of the need to provide government authorities with reliable data and to improve their capacity to do so, but progress is slow. Each point represents an import value at the HS 6 level for Zambia. Along the diagonal, the two values are equal.
It can be seen that they are correlated and roughly straddle the diagonal, suggesting no systematic bias but rather wide variation. In contrast, it is spread out almost uniformly. The unit of observation is the HS 6 tariff line (3, observations). Values between zero and one on the horizontal axis i. Observations at the extremes (mirror or direct trade value at zero) have been taken out. Official data on overland trade between sub-Saharan African or Central Asian countries, for instance, understate true trade by unknown but probably wide margins, making any assessment of the extent of regional trade hazardous at best.
Missing values create particular problems. First, very often lines with zero trade are omitted by national customs rather than reported with a zero value, which makes it easy to overlook them. Second, it is generally difficult to tell true zero trade from unreported trade or entry errors. Sometimes the nature of the data suggests entry errors rather than zero trade; for instance, when a regular trade flow is observed over several years with a zero in between.
However, trade data at high degrees of disaggregation are typically volatile, making interpolation risky. Basically, judgment must be exercised on a case-by-case basis as to how to treat missing values. Volumes, however, are seldom used. First, they cannot be aggregated (tons of potatoes cannot be added to tons of carrots); second, volumes are badly monitored by customs for the same reason that exports are: typically, they are not what trade taxes are assessed on. The result is often tricky to interpret, however, for two reasons.
Wider categories worsen composition problems. But narrower categories suffer from a second problem. Because measurement errors in volumes are in the denominator, they can have brutally nonlinear effects. Suppose, for example, that a very small volume is mistakenly entered in the system. Because the unit value is the ratio of trade value to volume, it will become very large and thus seriously bias subsequent calculations.
Narrow categories are likely to have small volumes and thus be vulnerable to this problem. One needs to strike a balance between composition problems and small-volume problems; there is no perfect solution. Calculations or statistics based on unit values must therefore start with a very serious weeding out of aberrant observations in the data.
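The book performs this weeding in Stata; the sketch below illustrates the idea in Python. The factor-of-five cutoff around the median is an arbitrary illustrative choice, not a recommendation from the text.

```python
from statistics import median

def unit_value(value, volume):
    """Unit value = trade value / volume; undefined (None) when the
    volume is zero or missing."""
    if not volume:
        return None
    return value / volume

def weed_outliers(unit_values, factor=5.0):
    """Drop unit values more than `factor` times above (or below
    1/factor of) the median of their group -- a crude guard against
    tiny mis-entered volumes, which blow the ratio up."""
    med = median(unit_values)
    return [u for u in unit_values if med / factor <= u <= med * factor]

# Five import lines for the same product; the last has a mis-entered
# tiny volume (0.01 instead of 10), so its unit value explodes.
uvs = [unit_value(v, q) for v, q in
       [(100, 10), (110, 10), (90, 10), (120, 10), (100, 0.01)]]
clean = weed_outliers(uvs)  # the 10000.0 observation is dropped
```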
Applications

1. This approach goes back to the work of Leamer. The equation can be estimated by OLS. With these right-hand side (RHS) variables, note that we are already in trouble. We will defer a full discussion of these issues until Chapter 3, but suffice it to note here that non-trivial questions are involved in the cross-country measurement of GDPs.
The inclusion of country fixed effects is necessary to reduce the problem of omitted variables that might be correlated with explanatory variables, introducing a bias in the estimation. The figure shows that petroleum refineries constituted the main export sector in both years, though their share in total exports declined from more than 20 per cent to less than 15 per cent between the two years. Industrial chemicals, chemicals, apparel and transport equipment, on the other hand, all saw their shares grow over the same period.
The share of transport equipment in total exports, for instance, grew from less than 1 per cent to more than 5 per cent. For the top twenty destination countries, Figure 1. The figure shows that the United States constituted the main destination country in both years, though its share in total exports declined from more than 45 per cent to around 25 per cent between the two years. The figure also shows increases in the share of exports to neighbouring countries such as Venezuela, Ecuador and Peru.
Take all destination countries for home exports; calculate their shares in total home exports, in logs, and call them x. Do a scatter plot of y against x and draw the regression line. If it slopes up (down), larger destinations have faster (slower) import growth; the orientation is favourable (unfavourable). In Figure 1. In the case of Colombia, the orientation is favourable. In the latter case, one would construct a scatter plot using the share of product k to destination j in home exports and the rate of growth of world trade of product k to destination j (see Figure 1).
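The book builds these plots in Stata; here is a minimal Python sketch of the slope calculation (the three destinations and growth rates are invented for illustration):

```python
import math

def ols_slope(x, y):
    """Slope of the OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

def growth_orientation(export_shares, import_growth):
    """x = log share of each destination in home exports,
    y = growth of that destination's total imports.
    A positive slope means exports lean toward fast-growing markets
    (favourable orientation)."""
    x = [math.log(s) for s in export_shares]
    return ols_slope(x, import_growth)

# Larger destinations here also grow faster, so the slope is positive.
slope = growth_orientation([0.5, 0.3, 0.2], [0.10, 0.05, 0.02])
```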
A negative correlation, indicating positioning on slow-growing products, may provide a useful factual basis for discussions about whether government resources should be used to foster growth at the extensive margin, e.g.

Intra-industry trade

The Grubel-Lloyd (GL) index of intra-industry trade is a useful indicator of how much trade is of the Krugman type: two-way trade in differentiated varieties.
A problem with such an index is that it is sensitive to the level of aggregation. The GL index is obviously higher when the data are more aggregated. Thus, in general, one can expect to observe lower measured levels of intra-industry trade at lower levels of aggregation.
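A small Python sketch makes the aggregation effect concrete (the book computes GL indices in Stata; the product labels here are hypothetical). Two sub-products with one-way trade in opposite directions look like pure two-way trade once merged:

```python
def grubel_lloyd(flows):
    """Aggregate Grubel-Lloyd index in [0, 1]:
    1 - sum_k |X_k - M_k| / sum_k (X_k + M_k),
    where flows maps product -> (exports, imports)."""
    num = sum(abs(x - m) for x, m in flows.values())
    den = sum(x + m for x, m in flows.values())
    return 1.0 - num / den

# Disaggregated: A1 only exported, A2 only imported -> no IIT at all.
fine = {"A1": (10.0, 0.0), "A2": (0.0, 10.0)}
# Merged into one category A -> looks like perfectly matched two-way trade.
coarse = {"A": (10.0, 10.0)}
```

Here grubel_lloyd(fine) is 0 while grubel_lloyd(coarse) is 1: the intra-industry trade measured at the aggregate level is a statistical illusion, exactly the point made in the text.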
These values are closer to the real value of IIT, while high values at aggregate levels are statistical illusions. Decomposition of export growth In this application we implement the decomposition of export growth proposed in equation 1. In the table, the first column represents the percentage contribution of the intensive margin; the second column represents the percentage contribution of the new-product margin; the third column represents the percentage contribution of the product death margin.
Stata do file for Table 1. This figure plots the normalized Herfindahl indices, on both the export and the import side, for five Latin American countries (Argentina, Brazil, Chile, Colombia and Peru) in the two years. Notice that the higher the index, the more concentrated exports or imports are in a few sectors.
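The normalized Herfindahl index used here can be sketched in Python (the book's do-files use Stata; the sectoral values below are invented):

```python
def normalized_herfindahl(values):
    """Normalized Herfindahl concentration index over n sectors:
    (H - 1/n) / (1 - 1/n), where H is the sum of squared shares.
    0 = perfectly even spread, 1 = everything in one sector."""
    total = sum(values)
    h = sum((v / total) ** 2 for v in values)
    n = len(values)
    return (h - 1.0 / n) / (1.0 - 1.0 / n)

even = normalized_herfindahl([25, 25, 25, 25])     # 0.0
lopsided = normalized_herfindahl([100, 0, 0, 0])   # 1.0
```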
Observe that the indices are higher on the export side than on the import side for Chile and Peru, whose export structures are rather concentrated in mineral products.

Trade complementarity

Trade complementarity indices can be traced over time. All TCIs have increased over time; however, the indices in panel (a) are lower than those in panel (b), indicating that for Chile patterns of import complementarity are more developed with North America than with neighbouring countries.
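A common form of the TCI (there are several variants, so the exact formula below is an assumption on our part, implemented in Python rather than the book's Stata) compares one country's import structure with another country's export structure:

```python
def trade_complementarity(import_shares_i, export_shares_j):
    """TCI_ij = 100 * (1 - sum_k |m_ik - x_jk| / 2), where m_ik is the
    share of good k in i's imports and x_jk its share in j's exports.
    100 = the two structures match perfectly; 0 = no overlap at all."""
    goods = set(import_shares_i) | set(export_shares_j)
    gap = sum(abs(import_shares_i.get(k, 0.0) - export_shares_j.get(k, 0.0))
              for k in goods)
    return 100.0 * (1.0 - gap / 2.0)

perfect = trade_complementarity({"a": 0.6, "b": 0.4}, {"a": 0.6, "b": 0.4})
none = trade_complementarity({"a": 1.0}, {"b": 1.0})
```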
Revealed factor intensities

The revealed factor intensities database developed by UNCTAD can be used to visualize how the revealed factor intensity of exports relates to national factor endowments. Because the weights sum up to one, revealed factor intensities can be shown on the same graph as national factor endowments. The distance between the two is an inverse measure of comparative advantage. In each panel, the horizontal axis measures capital per worker in constant PPP dollars and the vertical axis measures human capital in average years of educational attainment.
The left panel shows Costa Rica before Intel. A dust of small export items in the north-east quadrant indicates exports that are typical of countries with more capital and human capital than Costa Rica has. Yet it remains the case that semiconductor exports are typical of countries with two more years of educational attainment than Costa Rica and over twice the capital per worker. It is constructed by taking a weighted average of the per-capita GDPs of the countries exporting a product, where the weights reflect the revealed comparative advantage of each country in that product.
As expected, items with low PRODY tend to be primary commodities that constitute a relatively important share of the exports of low-income countries. Conversely, products with the highest PRODY values constitute a substantial share of the exports of high-income countries in our sample. There is a strong and positive correlation between these two variables. However, Hausmann et al. In the dataset used for this application, however, institutional quality, proxied by the Rule of Law index of the World Bank, is positively correlated with EXPY, meaning that the index also captures some broad institutional characteristics of a country.
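PRODY and EXPY can be sketched as follows (Python for illustration; the countries, goods and GDP figures are hypothetical). The PRODY of a good is the average of its exporters' per-capita GDPs with normalized export-share weights; the EXPY of a country is the export-share-weighted average of the PRODYs of its export basket.

```python
def prody(exports, gdp_pc, good):
    """Weighted average of exporters' per-capita GDPs; each country's
    weight is the share of the good in its own exports, normalized to
    sum to one across exporters (an RCA-style weighting)."""
    shares = {}
    for c in gdp_pc:
        X_c = sum(v for (cc, g), v in exports.items() if cc == c)
        shares[c] = exports.get((c, good), 0.0) / X_c
    total = sum(shares.values())
    return sum(s / total * gdp_pc[c] for c, s in shares.items())

def expy(exports, gdp_pc, country):
    """Export-share-weighted average of the PRODYs of a country's
    export basket."""
    X_c = sum(v for (c, g), v in exports.items() if c == country)
    goods = {g for (c, g) in exports if c == country}
    return sum(exports[(country, g)] / X_c * prody(exports, gdp_pc, g)
               for g in goods)

# Hypothetical data: chips exported mostly by the rich country.
exports = {("rich", "chips"): 90.0, ("rich", "bananas"): 10.0,
           ("poor", "chips"): 10.0, ("poor", "bananas"): 90.0}
gdp_pc = {"rich": 40000.0, "poor": 2000.0}
```

Chips get a high PRODY (36,200 here) and bananas a low one (5,800), so the rich country's EXPY far exceeds the poor country's.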
Terms of trade

Since the beginning of the s, the evolution of the terms of trade (TOT) of developing countries has been continuously measured and discussed by UNCTAD and reported in the yearly Trade and Development Report, with a special chapter on that topic in the issue.

Exercises

1. The assignment is as follows.

Preliminaries

a. Indicate what nomenclature is used and at what degree of disaggregation the data are available. Check what variables are available for those countries and for what year.
Make sure you select a country for which enough information is available.

Revealed comparative advantage

a. For the two countries of your choice, calculate the normalized RCA (NRCA) index for every year between and included and calculate the average for those three years.
Do the same for —. For the two countries of your choice, draw a scatter plot in which each point represents a sector, with NRCA values for — on the horizontal axis and — values on the vertical one. Interpret the meaning of the diagonal and comment.

Growth orientation

a. Calculate the rate of growth of world trade in each sector between these two periods.
For the two countries of your choice, draw a scatter plot in which each point represents a sector, world trade growth is on the horizontal axis, and NRCA values for — are on the vertical axis. Draw a regression line on the scatter plot. Draw a similar scatter plot but using — NRCA values.
Comment on any difference with the previous one.

Geographical composition

a. Calculate the Trade Intensity Index (TI) between your selected countries and their trade partners for —. Draw a scatter plot with — values on the horizontal axis and — values on the vertical one. Did it go down for other countries? Does that necessarily indicate trade diversion?
Why or why not? The selected sample covers 42 countries with data for the years , and or nearest years. The assignment is as follows: 1. Check for which countries and years data are available. Compute the Spearman rank correlation and the simple correlation between the different country- and sector-level offshoring measures available.
Select a country for which data are available for the years , and Draw a scatter plot in which each point represents offshoring in a given sector. Put values on the horizontal axis and values on the vertical axis. Do the same for and Determine which industries offshore most. Consider all the countries available. Determine which countries offshore and vertically specialize the most.
In order to account for the fact that the relationship between offshoring and GDP per capita might not be linear, re-estimate the following specification and comment on the results. Determine the level of GDP per capita at which the effect of offshoring is maximized. We will try to reserve superscripts for countries and subscripts for commodities and time throughout.
Input—output tables describe the sale and purchase relationships between producers and consumers within an economy. For an application, see Loschky and Ritter. For a classification, see World Trade Organization. See Schott. OECD input—output tables, for instance, report data on domestic, imported and total intermediate input usage by different sectors. See Hausmann et al. On this, see McKinsey Global Institute. See Applications 2. Krugman. See Application 2. Surveys highlight the unavailability of credit as a key binding constraint, not just for export entrepreneurship but for the survival of existing export relationships.
See Brenton et al. Alternative measures of concentration are the Gini index and the Theil index, which are pre-programmed in Stata (ineqerr command). Di Giovanni and Levchenko propose a more general measure of riskiness based on the variance—covariance matrix of sectoral value added.
On export breakthroughs raising concentration, see Easterly et al. On the natural resource curse, see e.g. Brunnschweiler and Bulte and the contributions in Lederman and Maloney. However, as explained by Hausmann et al. See Application 3. Overall, whether the members of a PTA gain or lose depends on the level of the initial MFN tariff and on the elasticities of demand and supply.
See World Trade Organization. It should be noted that three-dimensional diagrams should be reserved for three-dimensional variables (here source, destination and trade value). If the data are two-dimensional (items vs.). A good example of an exhaustive empirical study of the potential gains from regional integration in sub-Saharan Africa is Yeats, which provides a template for how such a study should be organized and carried out.
This example is taken from Tumurchudur. Observe that Figure 1. Trade complementarity indices are constructed in Application 2. All series are updated every other month. Data series start at the beginning of and run to the latest available monthly information, which is typically two months behind the current date. In fact, this definition corresponds to what is more precisely known as the net barter terms of trade.
The gross barter terms of trade is the ratio between the quantity index of exports and the quantity index of imports. Other extensions include adjusting TOT for changes in the productivity of exports or adjusting simultaneously for the productivity of exports and imports. As price and productivity are the two main sources of factor remuneration, those indices are respectively called the single and double factorial terms of trade.
Because there is only one entry per country pair in a unit of time, the volume of data is limited. Because each revision entails classification changes, care must be exercised when dealing with time series straddling revisions. In all, 17 per cent of the HS 6 lines have been introduced in successive revisions. The data require very careful handling as product classifications are erratic: from one year to the next, one HS 6 category will be split into several HS 8, then re-grouped, then moved to some different code, etc.
HS 10 data are not communicated to the public. Details about the dataset can be found in Gaulier and Zignago Unfortunately, those linkages cannot be related to trade because the input—output tables do not distinguish between domestic and imported inputs.
On this, see Anson et al. When we get to the parametric econometric analysis of trade flows later in this handbook, how missing values are handled will become especially important since omitting the information carried by zero trade lines when they really represent zero trade may result in biased estimates of the relationship between trade and its determinants.
For the sake of the concepts treated in this chapter, it should be observed that industry or country averages may not be very meaningful in the presence of many missing observations because they will then correspond to different time periods or refer to different countries in different years. An alternative is to use GDP as weights although there is no perfect fix to this problem.
Logs will be negative because shares are less than one. That is not a problem. Notice, however, that the regression line is almost flat. In the HS nomenclature, there are 21 sections, 96 chapters and around 5, HS 6 products (depending on the year and the concordance).
GL indexes are highest for sections, lowest for products. Notice that Herfindahl concentration indexes are even larger for countries heavily dependent on oil exports. For instance, as you can check in the data, the normalized index for Nigeria was 0.
In Hausmann et al. The difference in results might be driven by the year used for the regressions (we do not have data for , the year used by them).

Anson, J.
Bacchetta, M.
Balassa, B.
Brander, J.
Brenton, P.
Brunnschweiler, C.
Cadot, O.
Di Giovanni, J.
Easterly, W.
Feenstra, R.
Frankel, J.
Gaulier, G.
Grigoriou, C.
Hausmann, R.
Hummels, D.
Krugman, P.
Laursen, K.
Leamer, E.
Lederman, D.
Lipsey, R.
Loschky, A.
Michaely, M.
Nicita, A.
Rauch, J.
Rozanski, J.
Schiff, M.
Shirotori, M.
Tumurchudur, B.
Yeats, A.

Analytical tools
1. Tariffs
2. Non-tariff measures (NTMs)
3. Trade policy stance
C. Data
1. WITS
3. Market Access Maps
4. Other data sources
D. Applications
1. Generating a tariff profile
2. Assessing the value of preferential margins
E. Exercises
1. Tariff profile
2. Ad valorem equivalents of non ad valorem tariffs
3. Bias of trade-weighted average tariffs
Figure 2. Simple vs.
Effective rates of protection: illustrative calculation
Table 2. ERPs and escalating tariff structures
Table 2. International classification of non-tariff measures
Table 2. Price-gap calculations compared: EU bananas
Table 2. Coverage ratio: illustrative calculation
Table 2. Summary statistics
Table 2. Frequency distribution
Table 2. Tariffs and imports by product groups
Table 2. Calculation of ad valorem equivalents of specific tariffs
Box 2. Simple and import-weighted averages
Box 2. Applying the price-gap method to the EU banana market

Overview and learning objectives

This chapter introduces you to the main techniques used for trade policy quantification. More precisely, it presents the tools used to describe, synthesize and quantify trade policies.
Original tariff data may be cumbersome to handle, as they need to be aggregated and specific tariffs have to be converted into ad valorem equivalents. Three common issues that arise when characterizing tariff structures are discussed: the calculation of effective rates of protection, the related tariff escalation phenomenon, and the under-representation of high tariffs when using import-weighted averages at the aggregate level.
We first introduce you to a number of approaches used to characterize various aspects of a trade-policy stance. We start with simple tariff profiles and briefly explain how various tariff indicators can be calculated. We then take a close look at NTMs and how their incidence and effects on trade can be estimated using import-coverage ratios and price-gap methods.
We next look at recent attempts to define and calculate overall trade restrictiveness indices. Following this discussion of analytical tools, we take you on a tour of the main sources of data on tariffs and NTMs. Finally, in the third part of the chapter, we illustrate how the indicators introduced in the first part can be calculated in Stata using the data sources presented in the second part.
The chapter does not discuss the effects of trade measures. After reading this chapter you will be able to perform a trade-policy analysis that draws on the appropriate type of information, is presented in an informative but synthetic way and, like the trade-flow analysis of Chapter 1, is easy for specialists and non-specialists alike to digest.

Analytical tools

Trade policies are the policies that governments adopt toward international trade. These policies may involve a variety of different actions and make use of a number of different instruments.
Governments typically apply different combinations of measures to each of the thousands of products imported or exported. Moreover, the same measure, for example a tariff, can be set at different levels, with sometimes very different effects, for example on trade, depending on the product. The challenge is one of aggregating both across products and across very different measures.
While economists typically recognize that trade policies can serve all sorts of purposes, they often focus on their restrictiveness. This argument, however, is awkward because it is essentially static. Static gains from trade opening are of an order of magnitude of less than 5 per cent of GDP spread over a ten-year adjustment period, and they are dwarfed by the rates of growth currently observed in developing countries.
Because theory has relatively little to say on that relationship,2 the question is in essence an empirical one. However, the quest for a robust statistical association between trade openness and growth has proved to be laborious. The first problem was to come up with a measure of openness that would reflect policy stances in a comparable way. We will return to this question below, but suffice it to note here that Sachs and Warner proposed the earliest comprehensive index based on observed measures.
First-generation studies using cross-sections of countries (see Edwards, and references therein) generated correlations between openness and growth that proved to be unstable and unconvincing. Once this is done, large differences between pre- and post-liberalization growth rates are observed. So we are justified in analyzing trade policy from a normative angle where more openness is taken to be better for growth.

Tariffs

a. Concepts

A tariff is a tax levied on imports, or more rarely on exports, of a good at the border.
Its effect is to raise the price of the imported (exported) product above its price on the world (domestic) market. An ad valorem tariff is expressed as a percentage of the value of the imported (exported) good, usually as a percentage of the Cost, Insurance and Freight import value, while a specific tariff is stated as a fixed currency amount per unit of the good. Ad valorem tariffs are much more widely used than specific tariffs. One reason for this is that they are easier to aggregate and to compare and are thus more transparent, which is important in particular when countries negotiate tariff commitments.
Specific tariffs are more difficult to compare across products since they depend on the units in which products are measured. Box 2. The international price p can be calculated by dividing trade values by volumes, but the result often varies across time and countries, not just because prices themselves vary, but also because of composition effects, i. Moreover, systematic biases are likely. A tariff of, say, x euros per unit is stiffer as a proportion of price for a lower-priced good say of inferior quality or sophistication than for a higher-priced one.
The first approach consists in using 1 import unit values for the reporter calculated at the national tariff line level 8—10 digits , and if those are not available to replace them with 2 import unit values for the reporter calculated at the HS six-digit level and finally, if neither 1 nor 2 are available, to use 3 import unit values for OECD countries.
The second approach consists in using only 3 , i. The third approach is based on the methodology for the calculation of AVEs of agricultural non ad valorem duties referred to in the draft modalities for agriculture that are currently negotiated at the WTO.
When exemptions are important, overlooking them leads to over-estimation of the rate of protection. One possible fix consists of calculating the rate of protection as the ratio of collected duties to declared import value more on this in Chapter 6.
The first distinction is between most-favoured nation (MFN) tariff rates and preferential tariff rates. MFN tariffs are the ones that WTO members commit to accord to imports from all other WTO members with which they have not signed a preferential agreement. Preferential tariffs are the ones accorded to imports from preferential partners in free trade agreements (FTAs), customs unions or other preferential trade agreements, and are more likely than others to be at zero.
The second distinction is between bound and applied tariffs. For developed countries, bound tariffs are typically identical or very close to applied tariffs. It is important in applied analysis to apply the right tariffs to the right imports e.
However, there is often considerable uncertainty regarding the extent to which preferential tariffs are actually applied in regional integration agreements, especially in South-South agreements. Empirical tools. Tariff profiles: averages. Tariff schedules are typically defined at the HS eight-digit level of disaggregation or higher levels (up to HS 12), meaning that for a given country there are always more than 5,000 tariff lines (the number of HS six-digit sub-headings) and often many more than that.
Simple averages are straightforward to calculate by adding the tariffs on all lines and dividing by the number of those tariff lines. While both simple and import-weighted averages have the advantage of being relatively easy to calculate, these two methods have drawbacks that are illustrated in Box 2.
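The two averaging methods just described can be sketched in a few lines of Python. All the numbers below are illustrative, not the figures from Box 2 or Table 2; the point is only that a near-prohibitive tariff (good 3) pulls the simple average up while barely affecting the import-weighted one.

```python
# Illustrative tariff rates (per cent) and import values; good 3 carries
# a near-prohibitive tariff, so its imports are very small.
tariffs = [50.0, 40.0, 100.0]
imports = [100.0, 100.0, 1.0]

# Simple average: sum the tariffs and divide by the number of lines.
simple_avg = sum(tariffs) / len(tariffs)

# Import-weighted average: weight each tariff by its import value.
weighted_avg = sum(t * m for t, m in zip(tariffs, imports)) / sum(imports)

print(round(simple_avg, 1), round(weighted_avg, 1))  # 63.3 45.3
```

Note how the weighted average gives good 3 almost no weight precisely because its tariff has choked off imports — the bias discussed in the text.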
As for import-weighted averages, they correct this bias to some extent but under-weigh high tariffs and would give zero weight to prohibitive tariffs. In the example of Box 2, good 3 carries a very high tariff; as a result, imports of good 3 are very small. Simple averaging gives equal weight to all three tariffs.
Therefore, it gives excessive weight to good 3. For instance, when the tariffs on goods 1 and 2 are respectively at 50 and 40 per cent, the simple average tariff overstates protection (Table 2.), while in the same line the weighted-average tariff is more reasonable. When the tariff on good 1 rises to almost prohibitive levels (bottom of the table), the weighted average decreases and converges to the 40 per cent tariff on good 2.
This effect is shown graphically in Figure 2. Leamer proposed using world trade weights, but these do not properly represent the unrestricted trade structure of each country. Yet another weighting scheme is proposed by Kee et al. Their weights are an increasing function of import shares and elasticities of import demand at the tariff line level, which capture the importance that restrictions on these goods would have on the overall restrictiveness (see below).
Alternatively, both the simple and weighted averages can be reported as in Table 2. Dispersion Tariff averages only provide a partial picture of a given tariff structure. The dispersion of tariffs can be captured using various statistics.
A first option is to present a table of frequencies or a histogram. A second option is to calculate either the standard deviation or the coefficient of variation of tariff rates around the average. Two statistics have been used in the literature to measure the prevalence of tariff peaks: the first is the share of tariff items (lines or sub-headings) subject to duties higher than 15 per cent and the second is the share of tariff items subject to duties larger than three times the national average.
In general, the best solution to describe a tariff profile is probably to report a battery of tariff statistics including the averages (simple and weighted) as well as the share of duty-free lines, the share of peaks, the minima and maxima and standard deviations, by HS section and overall. HS chapters (two digits) represent a good compromise between total aggregation (large information loss) and excessive disaggregation (loss of synthetic value).
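A battery of the dispersion statistics mentioned above can be sketched as follows. The rates are invented for illustration; the two peak measures are the ones defined in the text (duties above 15 per cent, and duties above three times the national average).

```python
import statistics

# Illustrative tariff rates (per cent) for one HS section.
rates = [0.0, 0.0, 5.0, 10.0, 12.0, 18.0, 25.0, 50.0]

mean = statistics.mean(rates)
std = statistics.pstdev(rates)            # population standard deviation
cv = std / mean                           # coefficient of variation

duty_free = sum(r == 0 for r in rates) / len(rates)        # share of duty-free lines
intl_peaks = sum(r > 15 for r in rates) / len(rates)       # duties above 15 per cent
natl_peaks = sum(r > 3 * mean for r in rates) / len(rates) # above 3x the average
```

In practice these statistics would be computed by HS section and overall, as the text recommends.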
Effective protection and tariff escalation As already mentioned, a tariff provides protection from imports by allowing domestic producers to raise both the price and the production of import-competing domestic products. This, however, is not the end of the story. Domestic producers may be using imported inputs which might be subject to tariffs.
Such tariffs on imported inputs raise the costs for domestic producers and lower their output; the concept of effective protection is designed to capture this, measuring the positive or negative stimulus afforded to domestic value-added in a particular sector. Value-added is the difference between the value of output and the cost of purchasing intermediate inputs, which corresponds to the value of output that is available for payments to primary inputs.
Thus, effective protection measures the net protective effect of the whole tariff structure on domestic producers in a particular sector. Effective rates of protection (ERPs) are difficult to calculate, in particular because input-output coefficients (the aij) are typically available only at fairly aggregated levels. Two choices must then be made. First, how these coefficients are to be divided up between the more disaggregated categories for which ERPs are to be calculated. The simplest method consists of dividing the aij equally among the product categories included in the relevant SIC-3 category, but this can only be an approximation.
Second, what proportion of each input is imported. Again, this necessarily involves an approximation, and the simplest method is to use import-penetration ratios (also calculated at higher degrees of aggregation, since they require domestic production data; see Chapter 1). As the reader has guessed by now, so many approximations are involved that the result is unlikely to be very informative. ERPs would be better calculated using firm-level data from specifically designed questionnaires.
An illustrative calculation is shown in Table 2. For a producer selling on the domestic market, it can be seen from the first column that the tariff on shirts more than compensates for the extra cost due to the tariff on fabrics, resulting in a positive effective rate of protection (ERP). For an exporter, by contrast, output is sold at world prices while the input tariff still raises costs, so effective protection is negative; this illustrates how protection of inputs penalizes exporters of final goods. The table also shows that moderate differences in nominal rates can result in high ERPs, which explains why economists and international financial institutions (IFIs) have limited enthusiasm for escalating structures.
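The ERP arithmetic can be sketched with the standard one-input formula ERP = (t_out - a * t_in) / (1 - a), where a is the share of the input in the value of output at free-trade prices. The tariff rates and input share below are illustrative, not the values from Table 2.

```python
def erp(t_out, t_in, a):
    """Effective rate of protection for one output and one imported input.

    t_out: nominal tariff on the output; t_in: tariff on the input;
    a: input's share in the value of output at free-trade prices.
    """
    return (t_out - a * t_in) / (1 - a)

# Domestic sales: 20% tariff on shirts, 10% on fabrics, fabrics being
# 60% of the value of a shirt -> strongly positive effective protection.
print(round(erp(0.20, 0.10, 0.60), 2))   # 0.35

# An exporter gets no tariff on output but still pays the input tariff,
# so the ERP turns negative.
print(round(erp(0.00, 0.10, 0.60), 2))   # -0.15
```

The sketch shows how a moderate escalation in nominal rates (20 versus 10 per cent) translates into a much larger effective rate.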
Many customs unions have recently adopted four-band tariff structures, with rates ranging from 0-5 per cent to 15-25 per cent, differentiated only by end-use (capital goods, raw materials, intermediate goods and final goods). Non-tariff measures (NTMs). Concepts. NTMs are policy measures other than ordinary customs tariffs that affect international trade in goods at the border by changing quantities traded, prices or both. NTMs include a wide range of instruments such as quotas, licences, technical barriers to trade (TBTs), sanitary and phytosanitary (SPS) measures, export restrictions, customs surcharges, financial measures and anti-dumping measures.
The more neutral term NTMs has been preferred to the term non-tariff barriers (NTBs) because it leaves open the judgment of whether a given measure constitutes a trade barrier. NTMs may be intrinsically protectionist but they may address market failures as well, such as externalities and information asymmetries between consumers and producers.
NTMs which address market failures may restrict trade while at the same time improving welfare. Other NTMs such as certain standards or export subsidies may expand trade. Identifying a measure as an NTM does not imply a prior judgment as to its actual economic effect, its appropriateness in achieving various policy goals or its legal status under the WTO legal framework or other trade agreements. For example, because manufactured products are of increasing complexity, carrying potential health risks and other hazards, the number of product standards can be expected to rise.
Similarly, rising traceability demands for foodstuffs mean increasingly complex regulations for foodstuff imports. With the advent of environmental concerns linked to climate change, NTMs will likely assume even greater importance. Empirical tools Quantifying NTMs is a challenge because of their heterogeneous nature and because of the lack of data see below. Measurement typically focuses on the change in import price associated with the introduction of the NTM, the resulting import reduction, the change in the price elasticity of import demand or the welfare cost of the NTM.
A relatively common approach is to calculate ad valorem equivalents (AVEs) of NTMs, i.e. the level of an ad valorem tariff that would have a trade effect equivalent to that of the NTM. This is relatively straightforward in the case of quotas as, under perfect competition, their price and quantity effects can be replicated by appropriately chosen taxes on trade. These measures have in common that they do not require the use of econometric techniques.
Another more sophisticated approach, requiring the use of econometric techniques, is discussed in Chapter 3. The idea behind the price-gap method is that NTMs raise the domestic price above what it would be in their absence. The expression is simple only if the prices used have already been adjusted for other factors that influence prices, such as wholesale and retail distribution margins, rents or profits, taxes other than tariffs, and subsidies.
These factors must be subtracted from the price difference before the mark-up can be attributed to NTMs. A price gap is a very simple concept which, however, can be difficult to implement. Difficulties in its implementation come from the variety of ways of calculating internal and external prices, which give rise to widely divergent estimates. However, rarely does one have a fully comparable market. In the case of EU bananas (see Box 2.), the external price can be proxied by the price on a free market such as Norway.
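A minimal sketch of the price-gap calculation just described, with a hypothetical helper function and invented prices: the internal price is first stripped of distribution margins, then compared with the tariff-inclusive external price, and the residual wedge is attributed to the NTM.

```python
def price_gap_ave(p_internal, p_external, tariff=0.0, margins=0.0):
    """Tariff-equivalent (AVE) of an NTM via a price gap.

    p_internal: observed domestic price; p_external: world (c.i.f.) price;
    tariff: ad valorem tariff rate; margins: distribution margins per unit.
    """
    adjusted_internal = p_internal - margins       # net out distribution margins
    # Remaining wedge over the tariff-inclusive import price is the AVE.
    return adjusted_internal / (p_external * (1 + tariff)) - 1

# Internal wholesale price 950, external price 500, a 20% tariff and
# margins of 200 per unit leave a 25% wedge attributable to the NTM.
print(round(price_gap_ave(950.0, 500.0, tariff=0.20, margins=200.0), 2))  # 0.25
```

As the text stresses, the result is highly sensitive to how the internal and external prices (and the margins) are measured.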
But Norway being a very small market, the conditions of competition are not quite comparable. The United States is a better comparator from the point of view of size but it has lower freight rates. The variety of possible comparators generates very different external price estimates.
As for the internal price, in principle it should be easier to estimate but in practice this is not necessarily so. For instance, list prices on the domestic wholesale market may have little to do with prices practised in actual transactions; or when importers and distributors are owned by the same firm, transfer prices may be unobservable or uninformative.
The fourth column comes from a different study. These are clearly unrealistic estimates. The fifth column, by contrast, gives a very high estimate because the external price is unrealistically low. These examples show that price-gap calculations, while conceptually straightforward, can yield results that vary widely with the methods used to calculate internal and external prices. As we have already discussed, it is much easier to work on trade flows than on prices because unit-value data are typically erratic.
This information is typically fairly reliable but comes from private companies that tend to report list prices rather than real-life transaction prices; the difference can be substantial. For food and agricultural products, the FAO also publishes price series but their reliability is uneven. The price-wedge method suffers from a number of drawbacks.
Figure 2. Next comes the ACP supply, up to the quota, followed by the domestic supply shifted down vertically by the amount of subsidies. Quality differences would also need to be taken into account, but they are hard to quantify. Various extensions of the price-gap approach to calculating tariff-equivalent estimates of NTMs have been proposed in the literature.
Recent econometric approaches to estimating NTM effects are either price-based or quantity-based. Price-based methods examine international price differences and assess the extent to which NTMs cause certain domestic prices to be higher than they would be in their absence.
Quantity-based methods, by contrast, are gravity based most of the time, i.e. they infer the restrictiveness of NTMs from their estimated effect on trade flows. The decision to use a price- or quantity-based method is often based on the availability of data. As data on trade flows are abundant even at a highly disaggregated level, while price data are more problematic, quantity analysis is often preferred to price analysis. Frequency ratios are calculated as the share of tariff lines in a certain product category subject to selected NTMs.
Similarly, coverage ratios are calculated as the share of imports of a certain category of products subject to NTMs. Suppose that in HS 87 (transportation equipment), the home country has NTMs in place in the HS four-digit categories for passenger cars and motorcycles in order to protect a domestic car and motorbike assembly industry. The first step consists of coding the presence of an NTM in each category as a binary variable; the second step consists of multiplying this binary variable by the import share of each category and taking the sum.
This gives the coverage ratio. A first drawback is that, by the nature of the binary coding, an NTM that barely reduces trade volumes is treated in the same way as one that reduces them drastically. Worse, the end-result is subject to the same bias as that shown for average tariffs. As for frequency indexes, they would give the same weight to products that are not imported and to products that are imported in large amounts.
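The two-step coverage-ratio calculation (and the simpler frequency ratio) can be sketched as follows. The HS four-digit codes and import values are invented for illustration and are not the categories used in the text's example.

```python
# Step 1: binary NTM indicator by HS four-digit category (1 = NTM in place).
ntm_flag = {"8703": 1, "8704": 0, "8711": 1, "8712": 0}
# Import values for the same categories (illustrative).
imports = {"8703": 600.0, "8704": 300.0, "8711": 80.0, "8712": 20.0}

# Frequency ratio: share of lines subject to an NTM.
frequency_ratio = sum(ntm_flag.values()) / len(ntm_flag)

# Step 2 (coverage ratio): multiply the indicator by each category's
# import share and sum.
total = sum(imports.values())
coverage_ratio = sum(ntm_flag[k] * imports[k] for k in ntm_flag) / total
```

Note that both ratios inherit the drawbacks discussed above: a barely binding measure and a near-prohibitive one both enter as a 1.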
A third drawback is that NTM inventories may be incomplete and their coverage of measures may differ across measures and countries. In spite of these well-known drawbacks, coverage ratios have been widely used as summary measures of the incidence of NTMs.
Frequency measures have also been used in gravity equations to identify the effects of NTMs on trade flows see Chapter 3. Trade policy stance The discussion in subsection 2 has emphasized the diversity of measures taken by governments that affect trade, whether on the import or on the export side.
These forms of trade policy measures differ in several dimensions including how distortionary they are and the extent to which their use is constrained by WTO disciplines. Given the diversity of trade policy measures, summarizing the stance of a trade policy in a way that aggregates across goods and is comparable across countries is a non- trivial task.
Depending on their share in world trade, members are subject to review every two years (for the four largest members), every four years (for the next 16) or every six years (for other members). The construction of a TRI raises two challenges. First, different types of trade policy instruments (tariffs, quantitative restrictions, NTMs) must be brought to a common metric. Second, all the information from several thousand different tariff lines must be summarized in one aggregate measure.
A first generation of TRIs proposed a solution to the first problem. The IMF, for example, developed an index based on a number of observation rules. Countries were given a score for each type of trade barrier: average tariff, proportion of tariff lines covered by QRs, and so on, after which scores were averaged for each country, giving a Trade Restrictiveness Index going from one (most open) to ten (least open).
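A first-generation index of this kind can be sketched as follows. The scoring cut-offs below are entirely invented for illustration; they are not the IMF's actual rules, only a stand-in showing how ad hoc the mapping from instruments to a 1-10 scale is.

```python
# Hypothetical scoring rules (NOT the IMF's actual criteria): one point per
# 3 percentage points of average tariff, one point per 10% of QR coverage.
def score_tariff(avg_tariff_pct):
    return min(10, 1 + int(avg_tariff_pct // 3))

def score_qr_coverage(share_of_lines):
    return min(10, 1 + int(share_of_lines * 10))

# Average the instrument scores into a single 1 (open) to 10 (closed) index.
tri = (score_tariff(12.0) + score_qr_coverage(0.25)) / 2
print(tri)  # 4.0
```

The arbitrariness of the cut-offs is precisely the criticism the text makes of this generation of indices.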
These first generation TRIs brought different types of trade policy instruments to a common metric but they did this using ad-hoc criteria with no economic basis. It is not clear why a 3 per cent average tariff should be equivalent to a 5 per cent NTM coverage. A second generation of TRIs was then developed which proposes a more analytical solution to the first problem but also solves the second problem by using theoretically sound aggregation procedures.
Anderson and Neary used the equivalence between tariffs and quotas (see above) to convert QRs into tariffs and to construct a TRI embodying the effects of both tariff and quantitative restrictions. More recently, Kee et al. extended this approach, proposing three related indices. They also estimated all three indices for a wide range of countries using econometric estimates of import-demand elasticities from a previous paper of theirs (Kee et al.).
Weights are an increasing function of import shares and elasticities of import demand at the tariff line level, which capture the importance that restrictions on these goods would have on the overall restrictiveness. The logic of giving less weight to products with a less elastic demand is that a change in the tariff of those products would have less effect on the overall volume of trade. Note that the weights of the OTRI do not solve all the problems of import-weighted averages mentioned above as they continue to take the value of zero in the presence of prohibitive tariffs.
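The weighting scheme just described can be sketched with a simplified aggregate in which each line's weight rises with both its import value and its import-demand elasticity. This is a stylized formula, not the exact estimator of Kee et al.; all numbers are invented.

```python
# Illustrative tariff-line data: tariffs (as fractions), import values and
# absolute import-demand elasticities.
tariffs = [0.10, 0.30, 0.05]
import_vals = [500.0, 100.0, 400.0]
elasticities = [2.0, 0.5, 1.5]

# Weight each line by import value times elasticity: lines whose demand
# reacts little to the tariff get little weight in the aggregate.
weights = [m * e for m, e in zip(import_vals, elasticities)]
otri_like = sum(w * t for w, t in zip(weights, tariffs)) / sum(weights)
```

Line 2, with its high tariff but inelastic demand, barely moves the aggregate — the logic described in the text.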
In order to compute the aggregate measure of trade restrictiveness, one needs information on tariffs but more importantly AVEs of NTBs and elasticities of import demand at the tariff line level. These were estimated in two background papers. The methodology follows closely that of Kohli and Harrigan where imports are treated as inputs into domestic production, given exogenous world prices, productivity and endowments.
The logic of this approach is to predict imports using factor endowments and observe the deviations when NTBs are present. This is done for each HS six-digit tariff line in which at least one country has some type of NTB. The impact of NTBs on imports varies by country according to country-specific factor endowments. Data. There are three main portals through which users can access tariff data and, for the time being, primarily one that gives access to a broad range of non-tariff measures.
Finally, the Market Access Map (MAcMap) portal gives access to bound, applied and preferential tariffs and to tariff-quotas, anti-dumping duties and rules of origin. Note that all three portals also give access to trade data. In addition to those three main portals, there are a number of other databases that are accessible online and that provide information on specific measures or specific sectors.
This information has been complemented with data provided through other organizations and approved by members. Country coverage depends on the year, reaching up to 90 per cent. Information on ad valorem equivalents of specific tariffs as well as preferential tariffs is available for a subset of the countries for which applied tariffs are available.
Access is free of charge through the same facilities as IDB. Users can select information by user-defined tariff and trade criteria, compile 12 reports including tariff line level reports and summary reports and export report information to the desktop. It is planned to merge the two applications in the future. These data are complemented with information collected through firm surveys and a web portal, and stored in a distinct database.
That is, each NTM is coded in binary form at the level at which measures are reported by national authorities one if there is one, zero if there is none , allowing for estimating coverage ratios, i. Besides the controversial question of what is a barrier to trade and what is not see above , one limitation in the reporting of NTMs is their binary form, which does not distinguish between mild and stiff measures. For instance, a barely binding quota is treated the same way as a very stiff one.
Unfortunately, there is no perfect fix for this problem, and the binary form is probably the best compromise between the need to preserve as much information as possible and that of avoiding errors in reporting (the more detailed the coding, the larger the scope for errors). WITS offers the possibility to run quick searches as well as multi-country and multi-product queries. It allows downloading of any number of tariff lines or even entire tariff structures at the national tariff line level and allows this for more than one country at a time.
The section "Conversion Methods" explains the available methods. For example, the following statements convert annual series to monthly series using linear interpolation instead of cubic spline interpolation. See the section "ID Statement" for details. General Information: Specifying Observation Characteristics. It is important to distinguish between variables that are measured at points in time and variables that represent totals or averages over an interval.
Point-in-time values are often called stocks or levels. Variables that represent totals or averages over an interval are often called flows or rates. For example, the annual series U.S. Gross Domestic Product represents the total value of production over the year and also the yearly average rate of production in dollars per year.
However, a monthly variable inventory may represent the cost of a stock of goods as of the end of the month. When the data represent periodic totals or averages, the process of interpolation to a higher frequency is sometimes called distribution, and the total values of the larger intervals are said to be distributed to the smaller intervals. The process of interpolating periodic total or average values to lower frequency estimates is sometimes called aggregation.
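The distribution/aggregation distinction above can be sketched in Python. PROC EXPAND itself distributes a total using a fitted curve whose area matches the total; the equal-shares split below is a deliberately naive stand-in that only illustrates the constraint being preserved.

```python
# Twelve monthly flow values (illustrative).
monthly = [10.0, 12.0, 11.0, 9.0, 10.0, 12.0,
           13.0, 12.0, 11.0, 10.0, 9.0, 11.0]

# Aggregation: interpolate to a lower frequency by summing the totals.
annual_total = sum(monthly)

# Distribution: split the annual total back to months. A naive equal split
# (PROC EXPAND would fit a smooth curve instead); the totals must match.
distributed = [annual_total / 12] * 12
assert abs(sum(distributed) - annual_total) < 1e-9
```

Whatever the interpolation method, the defining property of distribution is that the smaller intervals sum back to the larger interval's total.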
If a series does not measure beginning-of-period point-in-time values, interpolation of the data values using this assumption is not appropriate, and you should specify the correct observation characteristics of the series. The following statements estimate the contribution of each month to the annual totals in A, B, and C, and interpolate first-of-month estimates of X, Y, and Z. For interval totals, a curve is fit to the data values so that the area under the curve within each input interval equals the value of the series.
Converting Observation Characteristics. The EXPAND procedure can be used to interpolate values for output series with different observation characteristics than the input series. The OBSERVED= option takes two values: the first specifies the observation characteristics of the input series; the second specifies the observation characteristics of the output series. This example does not change the series frequency, and the other variables in the data set are copied to the output data set unchanged.
This assumption can sometimes cause problems if the series must be within a certain range. For example, suppose you are converting monthly sales figures to weekly estimates. Sales estimates should never be less than zero, but since the spline curve ignores this restriction some interpolated values may be negative.
One way to deal with this problem is to transform the input series before fitting the interpolating spline and then reverse transform the output series. For example, you might use a logarithmic transformation of the input sales series and exponentiate the interpolated output series. The following statements fit a spline curve to the log of SALES and then exponentiate the output series. For example, the following statements add the lead of X to the data set A.
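The transform-and-back-transform idea can be sketched in Python (the SAS statements themselves are elided in this text). Interpolating log(sales) guarantees a positive result; plain linear interpolation stands in here for the spline, and the data points are invented.

```python
import math

def interp_log(x0, y0, x1, y1, x):
    """Interpolate between two positive values in log space, then
    exponentiate, so the result can never be negative."""
    t = (x - x0) / (x1 - x0)
    return math.exp((1 - t) * math.log(y0) + t * math.log(y1))

# Sales of 100 in one period and 1 in the next: the log-space midpoint is
# the geometric mean, safely positive.
mid = interp_log(0, 100.0, 1, 1.0, 0.5)
print(round(mid, 6))  # 10.0
```

A spline fitted in log space behaves the same way: after exponentiating, every interpolated value is strictly positive.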
For more information, see the section "Frequency Conversion". See Chapter 3, "Date Intervals, Formats, and Functions", for a complete description and examples of interval specification. See the section "Conversion Methods" for more information about these methods, and the section "Extrapolation" later in this chapter for details. Use a BY statement when you want to interpolate or convert time series within levels of a cross-sectional variable.
For example, suppose you have a data set STATE containing annual estimates of average disposable personal income per capita (DPI) by state and you want quarterly estimates by state. Only numeric variables can be processed. For each of the variables listed, a new variable name can be specified after an equal sign to name the variable in the output data set that contains the converted values.
If a name for the output series is not given, the variable in the output data set has the same name as the input variable. The operations are applied in the order listed. See the section "Transformation Operations" later in this chapter for the operations that can be specified.
The input data must form time series. This means that the observations in the input data set must be sorted by the ID variable within the BY variables, if any. If the ID statement is omitted, SAS date or datetime values are generated to label the input observations. The data are processed to interpolate any missing values and perform any specified transformations. Each input observation produces one output observation. Missing values are interpolated, and any specified transformations are performed, but no frequency conversion is done.
The input observations are not assumed to form regular time series and may represent aperiodic points in time. An ID variable is required to give the date or datetime of the input observations. Converting to a Lower Frequency When converting to a lower frequency, the results are either exact or approximate, depending on whether or not the input intervals nest within the output intervals and depending on the need to interpolate missing values within the series.
Identifying Observations The variable specified in the ID statement is used to identify the observations. Usually, SAS date or datetime values are used for this variable. For the last observation, the interval width is assumed to be the same as for the next to last observation. A note is printed in the SAS log warning that this assumption is made.
An error message is produced and the observation is ignored when an invalid ID value is found in the input data set. Range of Output Observations If no frequency conversion is done, the range of output observations is the same as in the input data set. When frequency conversion is done, the observations in the output data set range from the earliest start of any result series to the latest end of any result series.
Extrapolation The spline functions fit by the EXPAND procedure are very good at approximating continuous curves within the time range of the input data but poor at extrapolating beyond the range of the data. The accuracy of the results produced by PROC EXPAND may be somewhat less at the ends of the output series than at time periods for which there are several input values at both earlier and later times. However, if the start or end of the input series does not correspond to the start or end of an output interval, some output values may depend in part on an extrapolation.
If 1 January of that year is not a Sunday, the beginning of this week falls before the date of the first input value, and therefore a beginning-of-period output value for this week is extrapolated. Output values are computed for the full time range covered by the input data set. For the SPLINE method, extrapolation is performed by a linear projection of the trend of the cubic spline curve fit to the input data, not by extrapolation of the first and last cubic segments.
TOTAL indicates that the data values represent period totals for the time interval corresponding to the observation. That is, the product of the average value for an interval and the width of the interval is assumed to equal the total value for the interval. For example, suppose you are converting a series gross domestic product GDP from quarterly to monthly.
The GDP values are quarterly averages measured at annual rates. However, suppose you want to convert GDP from quarterly to monthly and also convert from annual rates to monthly rates, so that the result is total gross domestic product for the month. One solution is to rescale to quarterly totals and treat the data as totals. Alternatively, you could treat the data as averages but first convert to daily rates.
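The rescaling described above is simple unit arithmetic; a minimal sketch, with an invented GDP figure:

```python
# One quarterly GDP observation, measured as an average at an annual rate.
gdp_annual_rate = 20000.0

# Option 1: rescale to a quarterly total and treat the data as totals.
quarterly_total = gdp_annual_rate / 4
print(quarterly_total)  # 5000.0

# Option 2: keep the data as averages but convert to a daily rate first,
# so that interpolated monthly values can later be summed into totals.
daily_rate = gdp_annual_rate / 365
```

Either route makes the units of the input consistent with the monthly totals one wants out of the interpolation.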
For derivative output series, the derivative of the function is evaluated at output interval midpoints. A cubic spline is a segmented function consisting of third-degree (cubic) polynomial functions joined together so that the whole curve and its first and second derivatives are continuous.
For point-in-time input data, the spline curve is constrained to pass through the given data points. For interval total or average data, the definite integrals of the spline over the input intervals are constrained to equal the given interval totals. For boundary constraints, the not-a-knot condition is used by default.
This means that the first two spline pieces are constrained to be part of the same cubic curve, as are the last two pieces. While De Boor recommends the not-a-knot constraint for cubic spline interpolation, using this constraint can sometimes produce anomalous results at the ends of the interpolated series.
If only one constraint is specified, it applies to both the lower and upper endpoints; the not-a-knot condition is the default. Under the natural boundary condition, the second derivative of the spline curve is constrained to be zero at the endpoint. For total or averaged series, the spline knots are set at the start of the first interval, at the end of the last interval, and at the interval midpoints, except that there are no knots for the first two and last two midpoints.
Once the cubic spline curve is fit to the data, the spline is extended by adding linear segments at the beginning and end. These linear segments are used for extrapolating values beyond the range of the input data. For point-in-time output series, the spline function is evaluated at the appropriate points. For interval total or average output series, the spline function is integrated over the output intervals.
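The linear-extension rule above can be sketched generically: inside the data range the fitted curve is used, and beyond it a straight line continues the endpoint trend instead of the last cubic segment, which can swing wildly. The helper below is an illustration of the idea, not PROC EXPAND's implementation.

```python
def extend_linearly(f, slope_at_end, x_end):
    """Return a function equal to f up to x_end and linear beyond it,
    continuing with the given endpoint slope."""
    def g(x):
        if x <= x_end:
            return f(x)
        return f(x_end) + slope_at_end * (x - x_end)
    return g

f = lambda x: x ** 3                  # stands in for the last cubic segment
g = extend_linearly(f, slope_at_end=3.0, x_end=1.0)  # slope of x^3 at x=1 is 3

print(g(2.0))   # 4.0: linear continuation, versus f(2.0) = 8.0
```

The cubic would reach 8.0 at x = 2; the linear extension reaches only 4.0, which is why extrapolated values are far tamer than extended cubic segments.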
This produces a linear spline. For point-in-time data, the JOIN method connects successive nonmissing input values with straight lines. For interval total or average data, interval midpoints are used as the break points, and ordinates are chosen so that the integrals of the piecewise linear curve agree with the input totals. For point-in-time output series, the JOIN function is evaluated at the appropriate points.
For interval total or average output series, the JOIN function is integrated over the output intervals. For point-in-time input data, the resulting step function is equal to the most recent input value. For interval total or average data, the step function is equal to the average value for the interval. For point-in-time output series, the step function is evaluated at the appropriate points.
For interval total or average output series, the step function is integrated over the output intervals. If the input data are totals or averages, the results are the sums or averages, respectively, of the input values for observations corresponding to the output observations. If any input value is missing, the corresponding sum or mean is also a missing value.
If the selected value is missing, the output annual value is missing. However, gaps in the sequence of input observations are not allowed. Each value of the series is replaced by the result of the operation. In the table of transformation operations, the notation [n] indicates that the argument n is optional; the default is 1. The notation "window" is used as the argument for the moving statistics operators, and it indicates that you can specify either an integer number of periods n or a list of n weights in parentheses.
The notation s indicates the length of seasonality, and it is a required argument. This operation is also called simple exponential smoothing.
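The simple exponential smoothing operation mentioned above can be sketched in Python; each output value mixes the new observation with the previous smoothed value. The smoothing weight 0.3 is an illustrative choice, not a default of the procedure.

```python
def ewma(series, alpha=0.3):
    """Exponentially weighted moving average (simple exponential smoothing).

    Each output value is alpha * current + (1 - alpha) * previous smoothed.
    """
    out, s = [], None
    for x in series:
        s = x if s is None else alpha * x + (1 - alpha) * s
        out.append(s)
    return out

print([round(v, 6) for v in ewma([10.0, 10.0, 20.0])])
```

A small alpha smooths heavily (slow to react); alpha close to 1 tracks the raw series.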
Algebra I Overview View unit yearlong overview here Many of the concepts presented in Algebra I are progressions of concepts that were introduced in grades 6 through 8. The content presented in this course. Review of Fundamental Mathematics As explained in the Preface and in Chapter 1 of your textbook, managerial economics applies microeconomic theory to business decision making.
The decision-making tools. Chapter 4 Spline Curves A spline curve is a mathematical representation for which it is easy to build an interface that will allow a user to design and control the shape of complex curves and surfaces. Product Information This edition applies to version 23, release.
Review of Basic Statistics Chapters A. Introduction Chapter 1 Uncertainty: Decisions are often based on incomplete information from uncertain. Product Information This edition applies to version 22, release. Y Robert S Michael Goal: Learn to calculate indicators and construct graphs that summarize and describe a large quantity of values.
Using the textbook readings and other resources listed on the web. Time Series and Prediction Definition: A time series is given by a sequence of the values of a variable observed at sequential points in time. It will be updated periodically during the semester, and will be. Math Final Exam Practice Problems, Form: A Name: While every attempt was made to be complete in the types of problems given below, we make no guarantees about the completeness of the problems.
Grade 6 Mathematics, Quarter 2, Unit 2. If your computer does not have that tool. Contents 6. CHAPTER 1 Splines and B-splines an Introduction In this first chapter, we consider the following fundamental problem: Given a set of points in the plane, determine a smooth curve that approximates the.
Question 2: What is compound interest? Question 3: What is an effective interest rate? Question 4: What is continuous compound interest? In multiplying and dividing integers, the one new issue. B consume is one-half.
Contents 11 Association Between Variables Jay W. Forrester by Laughton Stanley. Integer Exponents Rational Exponents The different methods and the reasons for choosing a particular. Introduction to time series analysis Margherita Gerolimetto November 3, 1 What is a time series? A time series is a collection of observations ordered following a parameter that for us is time. The Brownian bridge construction The Brownian bridge construction is a way to build a Brownian motion path by successively adding finer scale detail.
This construction leads to a relatively easy proof. Algebra 2 - Chapter Prerequisites Vocabulary Copy in your notebook: Add an example of each term with the symbols used in algebra 2 if there are any. Linear Programming Supplement E Linear Programming Linear programming: A technique that is useful for allocating scarce resources among competing demands.
Objective function: An expression in linear programming. Simple linear regression Introduction Simple linear regression is a statistical method for obtaining a formula to predict values of one variable from another where there is a causal relationship between.
Edited by Dr. Seung Hyun Lee Ph. An estimate. There is audio to accompany this presentation. Audio will accompany most of the online. Growth and the growth rate In this section aspects of the mathematical concept of the rate of growth used in growth models and in the empirical analysis. All rights reserved. Permission for classroom use as long as the original copyright is included.
Probability and Statistics Prof. The distribution of a continuous random. Estimating the Average Value of a Function Problem: Determine the average value of the function f x over the interval [a, b]. In this chapter we shall look at behavioural models.
This is exactly what nptrend is doing. Voilà, the same answer! nptrend replaces x with its average ranks and then correlates those ranks with the values in y. That can be a little confusing, but remember: we are simply computing corr y x. nptrend is restrictive, though; it allows any values for y, but it only allows x to be averaged ranks. There are other ways to handle r x c tables: we do not have to assume an ordering on the r rows, and we can model a different slope per block r.
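For concreteness, here is a minimal Python sketch of the computation described above: replace x by its average (mid) ranks and correlate those ranks with y. The function names midranks and trend_corr are mine, chosen for illustration; this shows the arithmetic being discussed, not Stata's actual implementation of nptrend.

```python
import numpy as np

def midranks(x):
    """Average ranks: tied values get the mean of the ranks they span."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="mergesort")   # stable sort of the values
    ranks = np.empty(len(x))
    sx = x[order]
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1                             # extend over the tie group
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # 1-based average rank
        i = j + 1
    return ranks

def trend_corr(y, x):
    """Correlate y with the average ranks of x, i.e. corr y rank(x)."""
    return np.corrcoef(np.asarray(y, float), midranks(x))[0, 1]
```

Under the permutation null the variance of a correlation like this is approximately 1/(n-1), so z = r * sqrt(n-1) is a common large-sample test statistic; that approximation is mine to state here, not a claim about nptrend's exact p-value.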
Thanks, Ken, for adding yet another name for unstratified 2 x c tables! First define, as we did before, the per-stratum statistic T_h. Then form the statistic T by summing over the strata. Sometimes the sum is weighted by the stratum totals, and sometimes the strata are equally weighted; I believe that SAS does the former, while Rothman gives the formula for the latter.
Since the strata are independent, the mean and variance of T are easy to compute from the T_h. These procedures are asymptotically equivalent under the null hypothesis. Qs is a second-moment approximation to the permutation distribution of r_ay, and the permutation distribution of r_ay makes no assumptions about the scores a or the values y.
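The stratified construction can be sketched in the same spirit. Assuming we already have the per-stratum statistics T_h with their null means and variances, independence across strata lets us sum all three and standardize; combine_strata is a hypothetical helper written for this note, not an existing Stata or SAS command.

```python
import math

def combine_strata(t, mu, var):
    """Combine independent per-stratum statistics T_h.

    t   - observed T_h for each stratum
    mu  - null mean of each T_h
    var - null variance of each T_h

    Since strata are independent, E[T] = sum(mu) and Var[T] = sum(var),
    so the standardized statistic is z = (T - E[T]) / sqrt(Var[T]).
    """
    T = sum(t)
    z = (T - sum(mu)) / math.sqrt(sum(var))
    return T, z
```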
If you want to get fussy about p-values, then one should compute the permutation distribution itself, or compute higher-moment terms of the permutation distribution of r_ay. For the exact distribution, the true type I error equals the nominal type I error. Clearly, we need a command to do r x c tables, stratified and unstratified, with various choices of scores; we plan to implement something in the future. In the meantime, one can also use stmh, stmc, and tabodds.
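A brute-force way to approximate the permutation distribution of r_ay is to shuffle y, recompute the correlation each time, and count how often the shuffled correlation is at least as extreme as the observed one. This sketch (the function name perm_pvalue is made up) uses Monte Carlo resampling rather than full enumeration, which is exact only in the limit of many replications.

```python
import random
import numpy as np

def perm_pvalue(a, y, reps=2000, seed=1):
    """Two-sided Monte Carlo permutation p-value for corr(a, y)."""
    rng = random.Random(seed)
    a = np.asarray(a, dtype=float)
    y = list(map(float, y))
    r_obs = np.corrcoef(a, y)[0, 1]
    hits = 0
    for _ in range(reps):
        rng.shuffle(y)                       # permute y, keep a fixed
        r = np.corrcoef(a, y)[0, 1]
        if abs(r) >= abs(r_obs) - 1e-12:     # count equally extreme draws
            hits += 1
    # add-one correction keeps the estimate strictly positive
    return (hits + 1) / (reps + 1)
```

For small n one could instead enumerate all n! permutations with itertools.permutations, which gives the exact distribution the text refers to.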