This exploratory paper presents a new method to measure the output of government-provided secondary education in the Netherlands. Transferred knowledge and skills, and not pupil-hours, are used as the principal output measure of education. The approach developed involves a transformation of traditional unit cost weights.
This paper by Floris van Ruth describes two methods for computing a monthly statistic based on a lower-frequency, quarterly reference statistic and related monthly indicators. Both methods are based on a state space formulation and the Kalman filter. The first is an interpolation method, which is used to produce the monthly indicator of fixed capital formation development. The alternative is a cumulative approach. Both methods produce credible and reliable monthly statistics. The availability of relevant monthly indicators is crucial; in this case, indicators of the production and imports of capital goods are used.
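As an illustration of the state space machinery involved, the sketch below runs a Kalman filter and smoother on a simple local level model in which a reading is available only every third month, so the smoother interpolates the intervening months. This is a minimal stand-in, not the paper's actual model: the real application combines the quarterly reference series with monthly indicators, and the function name and the variances `q` and `r` here are illustrative choices.

```python
import numpy as np

def local_level_smooth(y, q=1.0, r=1.0):
    """Kalman filter + RTS smoother for a local level model.

    y : observations; np.nan marks months without a reading
        (the filter then skips the update step and interpolates).
    q : level innovation variance, r : observation variance.
    """
    n = len(y)
    a = np.zeros(n); p = np.zeros(n)          # filtered state, variance
    a_pred = np.zeros(n); p_pred = np.zeros(n)
    a_prev, p_prev = 0.0, 1e6                 # near-diffuse initialisation
    for t in range(n):
        a_pred[t], p_pred[t] = a_prev, p_prev + q   # predict
        if np.isnan(y[t]):                          # no observation this month
            a[t], p[t] = a_pred[t], p_pred[t]
        else:                                       # measurement update
            k = p_pred[t] / (p_pred[t] + r)
            a[t] = a_pred[t] + k * (y[t] - a_pred[t])
            p[t] = (1 - k) * p_pred[t]
        a_prev, p_prev = a[t], p[t]
    s = a.copy()                                # Rauch-Tung-Striebel pass
    for t in range(n - 2, -1, -1):
        g = p[t] / p_pred[t + 1]
        s[t] = a[t] + g * (s[t + 1] - a_pred[t + 1])
    return s
```

Running this on a series with observations only in months 3, 6 and 9 yields a smooth monthly path through the observed values.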
Competition can be good or bad for innovation. In this paper a model is tested in which an increase in competition stimulates innovation when competition is low, but discourages innovation when the level of competition goes beyond a certain threshold. We use industry-level as well as firm-level data, and find evidence for such a threshold using two different competition measures.
This paper by Floris van Ruth describes the concept, operation and outcomes of the Statistics Netherlands export, consumption and fixed capital formation radars. These are tools for monitoring and visualising how conditions for the growth of key macro-economic indicators develop. By showing in one diagram the indicators representing the driving factors of the relevant macro-economic quantity, a general picture is given of how conditions for this central indicator are developing. The graphic and dynamic character of the radar concept allows for easy and quick analysis. This study presents the indicators selected for the export, consumption and fixed capital formation radars and their properties. From these radars a conditions indicator can be derived. The significance of the conditions for explaining the development of the central indicators is tested, as well as their predictive power.
A small number of very big enterprises dominates R&D in the Netherlands. Some of these enterprises have their headquarters in the country, some have moved out, and some have always been foreign-based. We have already seen many mergers and take-overs, and we expect to see more. In the last two decades or so, the organisation of R&D within the big enterprises in the manufacturing industry has changed: instead of large centralized laboratories, we now see more, smaller decentralized R&D labs. Apart from the big enterprises engaged in R&D, we also see a large number of small and medium-sized enterprises focusing on R&D as their main activity.
The monthly unemployment rate is based on the data of the Dutch Labour Force Survey. In this paper a structural time series model is developed and applied to estimate the monthly unemployment rate for six domains. The estimation results under this model are compared with the generalized regression estimates and the estimates under some simpler models. With univariate structural time series models, information from other time periods is borrowed to improve the precision of the estimates. Further improvements are possible by borrowing information from other domains in a multivariate structural time series model. It turns out that the trends of the six domains are cointegrated: only three common trends have to be estimated, which means that the information from other domains is used efficiently. Additional gains can be achieved by modelling outliers and by using a parsimonious model for the seasonal component. Compared to the generalized regression estimator, the time series approach approximately halves the standard error of the estimates.
This paper explains the OQM model developed by SN, and describes nine applications of the model. The applications vary from large-scale (TQM and process assurance) to small-scale. They demonstrate that the concept of quality areas is both powerful and flexible, and can be used in any domain.
A major problem faced by virtually all institutes that collect statistical data on persons or enterprises is that data may be missing from the observed data sets. The most common solution for handling missing data is imputation. At national statistical institutes and other statistical institutes, the imputation problem is further complicated by the existence of constraints in the form of edit restrictions that have to be satisfied by the data. Examples of such edit restrictions are that someone who is less than 16 years old cannot be married in the Netherlands, and that someone whose marital status is unmarried cannot be the spouse of the head of household. Records that do not satisfy these edits are inconsistent, and are hence considered incorrect. An additional problem for categorical data is that the frequencies of certain categories are sometimes known from other sources or have already been estimated. In this paper we develop imputation methods for categorical data that take these edits and known frequencies into account while imputing a record.
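The flavour of edit-respecting imputation can be conveyed with a toy hot-deck imputer that only accepts donor values which keep the record consistent with the edits. The function name, record fields and edit predicate below are all hypothetical, and the paper's actual methods additionally calibrate to known category frequencies; this sketch shows only the edit-checking step.

```python
import random

def edit_respecting_impute(record, field, donors, edits, seed=0):
    """Hot-deck imputation restricted by edit rules.

    Each edit is a predicate that must return True on a valid record.
    A donor value is eligible only if filling it in leaves the record
    consistent with every edit. Illustrative sketch only.
    """
    rng = random.Random(seed)
    candidates = []
    for d in donors:
        trial = dict(record, **{field: d[field]})   # record with trial value
        if all(edit(trial) for edit in edits):
            candidates.append(d[field])
    return rng.choice(candidates) if candidates else None
```

With the abstract's example edit (a person under 16 cannot be married), a 12-year-old record can only receive "unmarried" from the donor pool.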
In this paper we show that scale effects, market structure, and regulation determine the poor productivity performance of the European business services industry. We apply parametric and nonparametric methods to estimate the productivity frontier and subsequently explain the distance of firms to the productivity frontier by market characteristics, entry and exit dynamics, and national regulation. The frontier is assessed using a detailed industry panel covering 13 EU countries. Our estimates suggest that most scale advantages are exhausted after reaching a size of 20 employees. This scale inefficiency is persistent over time and points to weak competitive selection. Market and regulation characteristics explain the persistence of X-inefficiency (sub-optimal productivity relative to the industry frontier). More entry and exit are favourable for productivity performance, while higher market concentration has a negative effect. Regulatory differences also appear to explain part of the business services' productivity performance. In particular, regulation-induced exit and labour reallocation costs have significant and large negative impacts on the process of competitive selection and hence on productivity performance.
At national statistical institutes experiments embedded in ongoing sample surveys are frequently conducted, for example to test the effect of modifications in the survey process on the main parameter estimates of the survey, to quantify the effect of alternative survey implementations on these estimates, or to obtain insight into the various sources of non-sampling errors. A design-based analysis procedure for factorial completely randomized designs and factorial randomized block designs embedded in probability samples is proposed in this paper. Design-based Wald statistics are developed to test whether estimated population parameters, like means, totals and ratios of two population totals, that are observed under the different treatment combinations of the experiment are significantly different. The methods are illustrated with a real-life application of an experiment embedded in the Dutch Labor Force Survey.
Statistical processes can be very complex, and it is not uncommon that they are designed and implemented as one big tangle of statistical activities. This paper is an initiative to structure and standardise the processing of statistical data. Concepts like ‘standard process step’ and ‘standard process’ are introduced and explained by means of both a non-statistical example (fixing a flat bike tyre) and a statistical example (matching two data files).
Measurement of labour market flows depends on three major aspects of the job definition, namely (i) the size of the job; (ii) the length of the job; and (iii) whether, in accordance with national accounting rules, jobs are identified with labour contracts, or whether, in accordance with labour demand theory, unfilled vacancies are also counted as jobs. This paper looks at the sensitivity of measuring job and worker flows with respect to these alternatives for the job definition. Measurement of labour market dynamics appears especially sensitive to the dynamic dimension of the job definition, namely that of (minimum) job length.
The paper describes the application of the Object Oriented Quality Management model to secondary data sources. The results obtained are compared to those of the independently developed Quality Framework for Administrative Data Sources; an administrative data source is an example of a secondary data source. This exercise was performed to evaluate the strengths and weaknesses of the quality management model and the completeness of the quality framework.
This paper describes some of the methodological problems encountered with the change-over from the NACE Rev. 1.1 to the NACE Rev. 2 in business statistics. Different sampling and estimation strategies are proposed to produce reliable figures for the domains under both classifications simultaneously. Furthermore, several methods are described that can be used to reconstruct time series for the domains under the NACE Rev. 2.
Two univariate outlier detection methods are introduced. In both methods, the distribution of the bulk of observed data is approximated by regression of the observed values on their estimated QQ plot positions using a model cumulative distribution function.
This paper presents the methods used for compiling balance sheets for consumer durables in the Netherlands. The Perpetual Inventory Method is used to convert time series of consumption of consumer durables to wealth stocks. Consumer durables are a memorandum item in the non-financial balance sheets.
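The core Perpetual Inventory Method recursion can be sketched in a few lines, assuming geometric depreciation at a constant rate; the actual compilation uses detailed service lives and depreciation profiles per type of durable, so the function and rate below are illustrative only.

```python
def pim_stock(purchases, dep_rate, initial_stock=0.0):
    """Perpetual Inventory Method with geometric depreciation.

    Accumulates a wealth stock from a time series of purchases:
        stock_t = (1 - dep_rate) * stock_{t-1} + purchases_t
    """
    stocks = []
    stock = initial_stock
    for p in purchases:
        stock = (1 - dep_rate) * stock + p
        stocks.append(stock)
    return stocks
```

For example, two periods of 100 in purchases at a 20% depreciation rate give stocks of 100 and then 0.8 * 100 + 100 = 180.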
This paper describes some new developments in survey methodology that may help to solve problems of survey taking in official statistics. The R-indicator is described as an additional indicator for survey quality. Web surveys are considered as a cheaper means of data collection, either as a single-mode survey or as one of the modes in a mixed-mode survey. Also attention is paid to more flexible ways of conducting the fieldwork of a survey. The R-indicator could play a role in this.
Statistical agencies have to ensure that respondents’ private information cannot be revealed from the tables they release. A well-known protection method is cell suppression, where values that provide too much information are left out of the table to be published. In a first step, sensitive cell values are suppressed; this is called primary suppression. In a second step, other values are suppressed as well, to ensure that the primarily suppressed values cannot be recalculated from the values published in the table. This second step is called secondary cell suppression.
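A toy audit illustrates why the second step is needed: when row and column margins are published, a suppressed cell that is the only suppressed cell in its row or column can be recovered by subtraction. The check below flags such cells; the function name is hypothetical, and this is a necessary condition only, since full audits of suppression patterns are usually formulated as linear programs.

```python
def needs_secondary(suppressed):
    """Return the suppressed cells that are still recoverable.

    suppressed : set of (row, col) positions of suppressed cells in a
    table whose row and column totals are published. A cell is exposed
    if it is the only suppressed cell in its row or in its column.
    """
    exposed = set()
    for (r, c) in suppressed:
        same_row = sum(1 for s in suppressed if s[0] == r)
        same_col = sum(1 for s in suppressed if s[1] == c)
        if same_row == 1 or same_col == 1:
            exposed.add((r, c))
    return exposed
```

A single primary suppression in a 2x2 table with margins is always exposed; adding the three complementary suppressions closes the leak.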
Macro-integration is the process of combining data from several sources at an aggregate level. We review a Bayesian approach to macro-integration with special emphasis on the inclusion of inequality constraints. In particular, an approximate method of dealing with inequality constraints within the linear macro-integration framework is proposed. This method is based on a normal approximation to the truncated multivariate normal distribution. The framework is then applied to the integration of international trade statistics and transport statistics. By combining these data sources, transit flows can be derived as differences between specific transport and trade flows. Two methods of imposing the inequality restrictions that transit flows must be non-negative are compared. Moreover, the figures are improved by imposing the equality constraints that aggregates of incoming and outgoing transit flows must be equal.
In this paper, we consider the situation where data are collected from both registers and sample surveys. We show how the bootstrap resampling method can be applied in this situation, in order to obtain insight into the accuracy of statistics based on combined data. The method is applied to the Dutch Educational Attainment File.
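The basic resampling idea can be sketched with a generic i.i.d. bootstrap for the standard error of a statistic; the paper's actual scheme is adapted to the combined register/survey setting, so the function below illustrates the principle only.

```python
import numpy as np

def bootstrap_se(sample, stat, n_boot=1000, seed=0):
    """Estimate the standard error of stat(sample) by resampling.

    Draws n_boot resamples with replacement, recomputes the statistic
    on each, and returns the standard deviation of the replicates.
    """
    rng = np.random.default_rng(seed)
    n = len(sample)
    reps = [stat(rng.choice(sample, size=n, replace=True))
            for _ in range(n_boot)]
    return float(np.std(reps, ddof=1))
```

For the mean of n i.i.d. draws with standard deviation sigma, the bootstrap standard error should approximate the analytical value sigma / sqrt(n).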
In 2008, there was a transition in the survey that measures perceived and actual safety in the Netherlands. In this paper by Kraan, Van den Brakel, Buelens, and Huys, the authors attempt to explain some of the significant discontinuities caused by this transition. The observed discontinuities can be explained by the introduction of new data collection modes: the web survey and the self-completion paper questionnaire.
This paper by Floris van Ruth describes how the current stance of the business cycle can be derived from a mixed set containing a limited number of selected indicators. Using the Netherlands and the United States as examples, it is shown that it is possible to extract the business cycle from a mix of leading, coincident and lagging indicators, as long as the set is not skewed towards one particular type of indicator. Different methods, both direct and indirect, of deriving the common business cycle are used to test the robustness of the results. The similarity between the common cycles resulting from the different methods is seen as a confirmation of the validity of the common cycle interpretation of the business cycle.
Benchmarking is the process of achieving mathematical consistency between low-frequency (e.g. annual) and high-frequency (e.g. quarterly) data. Statistics Netherlands is going to apply a new benchmarking method to the Dutch national accounts. The new method is based on the multivariate Denton method presented in Bikker and Buijtenhek (2006). In order to incorporate all economic relations into this model, we extended it with new methodological features, such as ratio constraints, soft constraints and inequality constraints. In this paper the new, extended multivariate Denton method is presented.
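The univariate additive variant of the Denton method conveys the core idea: adjust a preliminary quarterly series as smoothly as possible so that each year's quarters sum to the annual benchmark. The sketch below solves the resulting equality-constrained least-squares problem via its KKT system; it is a simplified stand-in, since the paper's multivariate method adds ratio, soft and inequality constraints on top of this.

```python
import numpy as np

def denton_additive(p, annual):
    """Additive first-difference Denton benchmarking (univariate).

    Finds x minimising || D (x - p) ||^2  subject to  A x = annual,
    where D takes first differences (smooth adjustments) and A sums
    the four quarters of each year.
    """
    p = np.asarray(p, dtype=float)
    n = len(p)
    m = len(annual)
    assert n == 4 * m, "need four quarters per annual benchmark"
    # first-difference matrix D: (n-1) x n
    D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)
    # aggregation matrix A: m x n, sums quarters within each year
    A = np.zeros((m, n))
    for i in range(m):
        A[i, 4 * i:4 * i + 4] = 1.0
    # KKT system: stack normal equations with the constraints
    Q = 2.0 * D.T @ D
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([Q @ p, np.asarray(annual, dtype=float)])
    return np.linalg.solve(K, rhs)[:n]
```

Benchmarking a flat preliminary series of ones against annual totals 4 and 8 distributes the year-two surplus smoothly across the quarters instead of introducing a jump at the year boundary.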