Bahan Kuliah Ekonometrika
Econometrics is the science concerned with measuring economic relationships. It thus combines economic theory, mathematics, and statistics into one unified system, forming a discipline in its own right, distinct from economics, mathematics, and statistics taken separately. Econometrics serves as a tool of economic analysis whose aim is to test the theorems of economic theory, expressed as relationships among economic variables, against empirical data.
Theorems that are a priori in economics are first stated in mathematical form so that they can be tested. The mathematical form of an economic theorem is called a model. Building econometric models is one of econometrics' contributions, alongside forecasting and the preparation of alternative quantitative decisions, which helps decision makers choose among options.
One of the most important parts of econometrics is regression analysis, which is used to determine how one variable relates to another. Based on the data used, econometric analysis is divided into three kinds: time-series analysis, cross-section analysis, and panel-data analysis. Time-series analysis describes the behavior of a variable over a sequence of successive periods, whereas cross-section analysis describes several regions at a single point in time (a snapshot).
Panel-data analysis, meanwhile, combines time-series data with cross-section data. In statistics, linear regression is an approach to modeling the relationship between a scalar dependent variable y and one or more explanatory variables denoted X.
The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, it is called multiple linear regression.
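To make the distinction concrete, here is a minimal sketch of simple linear regression using NumPy; the arrays x and y are invented for illustration:

```python
import numpy as np

# Hypothetical data: one explanatory variable x and a response y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

# Simple linear regression: closed-form least-squares estimates
#   b1 = cov(x, y) / var(x),  b0 = mean(y) - b1 * mean(x)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
print(b0, b1)  # intercept ≈ 0.09, slope ≈ 1.99
```

With more than one explanatory variable, the same idea generalizes to multiple linear regression, where the single column x becomes a matrix of regressors.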
This in turn should be distinguished from multivariate linear regression, where multiple correlated dependent variables are predicted rather than a single scalar variable. In linear regression, data are modelled using linear predictor functions, and unknown model parameters are estimated from the data. Such models are called linear models. Most commonly, linear regression refers to a model in which the conditional mean of y given the value of X is an affine function of X.
Less commonly, linear regression could refer to a model in which the median, or some other quantile, of the conditional distribution of y given X is expressed as a linear function of X. Like all forms of regression analysis, linear regression focuses on the conditional probability distribution of y given X, rather than on the joint probability distribution of y and X, which is the domain of multivariate analysis.
Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications. This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters, and because the statistical properties of the resulting estimators are easier to determine.
Linear regression has many practical uses. Most applications fall into one of the following two broad categories: If the goal is prediction, or forecasting, linear regression can be used to fit a predictive model to an observed data set of y and X values.
After developing such a model, if an additional value of X is then given without its accompanying value of y, the fitted model can be used to make a prediction of the value of y. Given a variable y and a number of variables X 1, ..., X p that may be related to y, linear regression analysis can be applied to quantify the strength of the relationship between y and the X j, and to identify which subsets of the X j contain redundant information about y. Linear regression models are often fitted using the least squares approach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some other norm (as with least absolute deviations regression), or by minimizing a penalized version of the least squares loss function (as in ridge regression).
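A small sketch of the prediction use case, with made-up numbers (the data here are exactly linear, so the fitted line is y = 1 + 2x):

```python
import numpy as np

# Observed (x, y) pairs; invented so that y = 1 + 2x exactly.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])

# Fit by least squares, with a column of ones for the intercept.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# An additional x value arrives without its y; the fitted model predicts it.
x_new = 6.0
y_pred = beta[0] + beta[1] * x_new
print(y_pred)  # 13.0
```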
Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms “least squares” and “linear model” are closely linked, they are not synonymous.
Given a data set of n statistical units, a linear regression model assumes that the relationship between the dependent variable y i and the p-vector of regressors x i is linear.
Often these n equations are stacked together and written in vector form as y = Xβ + ε. Some remarks on terminology and general use: The decision as to which variable in a data set is modeled as the dependent variable, and which are modeled as the independent variables, may be based on a presumption that the value of one of these variables is caused by, or directly influenced by, the other variables.
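The stacked form y = Xβ + ε can be sketched numerically; this is a toy simulation with an assumed true coefficient vector, not data from the source:

```python
import numpy as np

# Simulate n observations from y = X b + e with a known b, then recover it.
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 regressors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Least-squares solution of the stacked system (lstsq is numerically safer
# than explicitly inverting X'X in the normal equations).
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # close to beta_true
```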
Alternatively, there may be an operational reason to model one of the variables in terms of the others, in which case there need be no presumption of causality.
The matrix X is sometimes called the design matrix. Usually a constant is included as one of the regressors. Many statistical inference procedures for linear models require an intercept to be present, so it is often included even if theoretical considerations suggest that its value should be zero. Sometimes one of the regressors can be a non-linear function of another regressor or of the data, as in polynomial regression and segmented regression.
The regressors x ij may be viewed either as random variables, which we simply observe, or they can be considered as predetermined fixed values which we can choose. Both interpretations may be appropriate in different cases, and they generally lead to the same estimation procedures; however, different approaches to asymptotic analysis are used in these two situations.
β is a p-dimensional parameter vector; its elements are also called effects, or regression coefficients. ε i is the error term, disturbance term, or noise; this variable captures all factors other than the regressors x i which influence the dependent variable y i. The relationship between the error term and the regressors, for example whether they are correlated, is a crucial step in formulating a linear regression model, as it will determine the method to use for estimation. Consider a situation where a small ball is being tossed up in the air and then we measure its heights of ascent h i at various moments in time t i.
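The ball-toss example can be sketched as follows; the physical numbers (initial speed 20 m/s, g = 9.8 m/s²) are assumed for illustration:

```python
import numpy as np

# Simulated heights h i at times t i for a ball tossed upward:
#   h = v0*t - (g/2)*t**2 + noise, with v0 = 20 and g = 9.8 assumed.
t = np.linspace(0.1, 3.0, 30)
h = 20.0 * t - 4.9 * t**2 + np.random.default_rng(1).normal(0.0, 0.05, t.size)

# The model is quadratic in t but *linear* in the unknown coefficients,
# so ordinary least squares applies directly.
X = np.column_stack([t, t**2])
b, *_ = np.linalg.lstsq(X, h, rcond=None)
print(b)  # roughly [20.0, -4.9]
```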
Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Some methods are general enough that they can relax multiple assumptions at once, and in other cases this can be achieved by combining different extensions. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to get an accurate model.
The following are the major assumptions made by standard linear regression models with standard estimation techniques (e.g. ordinary least squares): Weak exogeneity. This essentially means that the predictor variables x can be treated as fixed values, rather than random variables. This means, for example, that the predictor variables are assumed to be error-free, that is, not contaminated with measurement errors. Although not realistic in many settings, dropping this assumption leads to significantly more difficult errors-in-variables models.
Linearity. This means that the mean of the response variable is a linear combination of the parameters (regression coefficients) and the predictor variables. Note that this assumption is much less restrictive than it may at first seem. Because the predictor variables are treated as fixed values (see above), linearity is really only a restriction on the parameters. The predictor variables themselves can be arbitrarily transformed, and in fact multiple copies of the same underlying predictor variable can be added, each one transformed differently.
This trick is used, for example, in polynomial regression, which uses linear regression to fit the response variable as an arbitrary polynomial function (up to a given degree) of a predictor variable.
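A sketch of that trick: the single predictor x is entered three times, transformed as x, x², and x³, and the fit remains a linear regression in the coefficients (the data are invented):

```python
import numpy as np

# Invented data generated from a cubic polynomial plus small noise.
x = np.linspace(-1.0, 1.0, 40)
y = 1.0 - 2.0 * x + 0.5 * x**3 + np.random.default_rng(2).normal(0.0, 0.01, x.size)

# Three transformed copies of the same predictor: x, x**2, x**3.
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # roughly [1.0, -2.0, 0.0, 0.5]
```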
This makes linear regression an extremely powerful inference method. In fact, models such as polynomial regression are often "too powerful", in that they tend to overfit the data. As a result, some kind of regularization must typically be used to prevent unreasonable solutions coming out of the estimation process.
Common examples are ridge regression and lasso regression. Bayesian linear regression can also be used, which by its nature is more or less immune to the problem of overfitting. In fact, ridge regression and lasso regression can both be viewed as special cases of Bayesian linear regression, with particular types of prior distributions placed on the regression coefficients. Constant variance (aka homoscedasticity). This means that different response variables have the same variance in their errors, regardless of the values of the predictor variables.
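A minimal ridge regression sketch, assuming the usual closed form b = (X'X + λI)⁻¹X'y for the penalized least-squares problem (data invented):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimate: minimizes ||y - X b||^2 + lam * ||b||^2."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 5))
y = X @ np.array([1.0, 0.0, 0.0, 2.0, -1.0]) + 0.1 * rng.normal(size=30)

b_ols = ridge(X, y, 0.0)     # lam = 0 recovers ordinary least squares
b_pen = ridge(X, y, 10.0)    # a larger lam shrinks coefficients toward zero
print(np.linalg.norm(b_pen) < np.linalg.norm(b_ols))  # True
```

The L2 penalty is what distinguishes ridge from plain least squares; lasso replaces it with an L1 penalty, which has no closed form but encourages exact zeros in the coefficients.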
In practice this assumption is often invalid (i.e. the errors are heteroscedastic). To check for heterogeneous error variance, or when a pattern of residuals violates the model's assumption of homoscedasticity (error equally variable around the "best-fitting line" for all points of x), it is prudent to look for a "fanning effect" between residual error and predicted values. That is, there will be a systematic change in the absolute or squared residuals when plotted against the predicted values. Error will not be evenly distributed across the regression line.
This will result in averaging over the distinguishable variances around the points to get a single variance that inaccurately represents all the variances of the line.
In effect, residuals appear clustered and spread apart on their predicted plots for larger and smaller values for points along the linear regression line, and the mean squared error for the model will be wrong.
Typically, for example, a response variable whose mean is large will have a greater variance than one whose mean is small.
In fact, as this shows, in many cases — often the same cases where the assumption of normally distributed errors fails — the variance or standard deviation should be predicted to be proportional to the mean, rather than constant. Simple linear regression estimation methods give less precise parameter estimates and misleading inferential quantities such as standard errors when substantial heteroscedasticity is present.
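The "fanning effect" described above can be sketched with simulated data whose error spread grows with the predictor (all numbers assumed):

```python
import numpy as np

# Simulate heteroscedastic data: error standard deviation proportional to x.
rng = np.random.default_rng(4)
x = np.linspace(1.0, 10.0, 200)
y = 3.0 * x + rng.normal(0.0, 0.5 * x)   # spread grows with x

# Fit ordinary least squares and compute residuals against fitted values.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
resid = y - fitted

# Compare residual spread below and above the median fitted value:
# a marked increase is the fanning effect.
lo = resid[fitted < np.median(fitted)].std()
hi = resid[fitted >= np.median(fitted)].std()
print(hi > lo)  # True for these data
```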
However, various estimation techniques (e.g. weighted or generalized least squares) can deal with heteroscedasticity in a quite general way. Bayesian linear regression techniques can also be used when the variance is assumed to be a function of the mean. It is also possible in some cases to fix the problem by applying a transformation to the response variable (e.g. fitting its logarithm). Independence of errors. This assumes that the errors of the response variables are uncorrelated with each other.
Actual statistical independence is a stronger condition than mere lack of correlation and is often not needed, although it can be exploited if it is known to hold. Bayesian linear regression is a general way of handling this issue. Lack of multicollinearity in the predictors. For standard least squares estimation methods, the design matrix X must have full column rank p; otherwise, we have a condition known as multicollinearity in the predictor variables.
This can be triggered by having two or more perfectly correlated predictor variables (e.g. the same predictor variable mistakenly entered twice). It can also happen if there is too little data available compared to the number of parameters to be estimated (e.g. fewer data points than regression coefficients). At most we will then be able to identify some of the parameters, i.e. narrow their values down to some linear subspace. See partial least squares regression. Methods for fitting linear models with multicollinearity have been developed; [1] [2] [3] [4] some require additional assumptions such as "effect sparsity", that a large fraction of the effects are exactly zero.
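Perfect multicollinearity is easy to demonstrate: duplicating a predictor leaves the design matrix rank-deficient (toy data assumed):

```python
import numpy as np

# One predictor entered twice (x2 is an exact linear copy of x1).
rng = np.random.default_rng(5)
x1 = rng.normal(size=20)
x2 = 2.0 * x1                     # perfectly correlated with x1
X = np.column_stack([np.ones(20), x1, x2])

# Full column rank would be 3; the duplicate drops it to 2, so the
# individual coefficients of x1 and x2 are not identifiable.
print(np.linalg.matrix_rank(X))   # 2
```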
Note that the more computationally expensive iterated algorithms for parameter estimation, such as those used in generalized linear models, do not suffer from this problem. In fact it is quite normal, when handling categorically-valued predictors, to introduce a separate indicator variable predictor for each possible category, which inevitably introduces multicollinearity. Beyond these assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods: The statistical relationship between the error terms and the regressors plays an important role in determining whether an estimation procedure has desirable sampling properties such as being unbiased and consistent.
This material explains in detail the steps for processing data with Eviews 5, accompanied by screenshots of Eviews 5 at each data-processing step and by econometric theory for each processing method. The Eviews 5 program: what is Eviews 5? SPSS is a computer program used to perform statistical analysis.
SPSS is one of the most widely used programs for statistical analysis in the social sciences. It is used by market researchers, health researchers, survey companies, governments, education researchers, marketing organizations, and so on. In addition to statistical analysis, data management (case selection, file reshaping, creation of derived data) and data documentation (a metadata dictionary stored together with the data) are also features of the base SPSS software.
The SPSS program can be launched in two ways. The full article can be downloaded here: