NEWS


jtools 2.3.0 (2024-08-25)

Bug fixes:

Enhancements:

Other changes:

jtools 2.2.2 (2023-07-11)

Bug fix:

Several enhancements:

jtools 2.2.1 (2022-12-01)

Important accuracy bug fix:

Other changes:

jtools 2.2.0 (2022-04-25)

Accuracy bug fixes:

Other bug fixes:

Enhancements:

Miscellaneous changes:

jtools 2.1.4 (2021-09-03)

jtools 2.1.3 (2021-03-12)

jtools 2.1.2 (2021-01-07)

jtools 2.1.1 (2020-11-16)

Bugfixes:

jtools 2.1.0 (2020-06-23)

New:

Bugfixes:

jtools 2.0.5 (2020-04-21)

Hotfix: Fixing failing tests on CRAN.

jtools 2.0.4

Hotfix release:

jtools 2.0.3 (2020-03-21)

New features:

jtools 2.0.2 (2020-01-24)

Minor release.

Fixes:

Other changes:

jtools 2.0.1 (2019-04-08)

Minor release.

Fixes:

Other changes:

jtools 2.0.0 (2019-02-08)

Big changes.

New spinoff package: interactions

To reduce the complexity of this package and help people understand what they are getting, I have removed all functions that directly analyze interaction/moderation effects and put them into a new package, interactions. There are still some functions in jtools that support interactions, but some users may find that everything they ever used jtools for has now moved to interactions. The following functions have moved to interactions:

Hopefully moving these items to a separate package called interactions will help more people discover those functions and reduce confusion about what both packages are for.

Important changes to make_predictions() and removal of plot_predictions()

In the jtools 1.0.0 release, I introduced make_predictions() as a lower-level way to emulate the functionality of effect_plot(), interact_plot(), and cat_plot(). This would return a list object with predicted data, the original data, and a bunch of attributes containing information about how to plot it. One could then take this object, with class predictions, and use it as the main argument to plot_predictions(), which was another new function that creates the plots you would see in effect_plot() et al.

I have simplified make_predictions() to be less specific to those plotting functions and eliminated plot_predictions(), which was ultimately too complex to maintain and caused problems for separating the interaction tools into a separate package. make_predictions() by default simply creates a new data frame of predicted values along a pred variable. It no longer accepts modx or mod2 arguments. Instead, it accepts an argument called at where a user can specify any number of variables and values to generate predictions at. This syntax is designed to be similar to the predictions/margins packages. See the documentation for more info on this revised syntax.

make_new_data() is a new function that supports make_predictions() by creating the data frame of hypothetical values to which the predictions will be added.
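
Here is a hedged sketch of the revised syntax; the named-list form of at and the use of a bare (unquoted) pred name are assumptions based on the margins-style syntax described above, and the model is just an illustration:

    library(jtools)
    fit <- lm(mpg ~ hp + wt, data = mtcars)
    # predictions along hp, computed at three hypothetical values of wt
    preds <- make_predictions(fit, pred = hp, at = list(wt = c(2, 3, 4)))
    head(preds)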

Generate partial residuals for plotting

I have added a new function, partialize(), that creates partial residuals for the purposes of plotting (e.g., with effect_plot()). One drawback of visualizing predictions alongside the original data with effect_plot() or similar tools is that the observed data may be too spread out to pick up on any patterns. However, sometimes your model is controlling for the causes of this scattering, especially with multilevel models that have random intercepts. Partial residuals adjust the observed data for the effects of those controlled-for variables, letting you see how well your model performs with all of those things accounted for.

You can plot partial residuals instead of the observed data in effect_plot() via the argument partial.residuals = TRUE or get the data yourself using partialize(). It is also integrated into make_predictions().
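
For example (a hedged sketch; the vars argument name for partialize() is an assumption, and the model is just an illustration):

    library(jtools)
    fit <- lm(mpg ~ hp + wt + cyl, data = mtcars)
    # plot partial residuals instead of the raw observed points
    effect_plot(fit, pred = hp, plot.points = TRUE, partial.residuals = TRUE)
    # or extract the partialized data yourself
    partialize(fit, vars = "hp")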

New programming helpers

In keeping with the "tools" focus of this package, I am making available some of the programming tools that previously were only used internally.

%nin%, %not%, and %just%

Many are familiar with how handy the %in% operator is, but sometimes we want everything except the values in some object. In other words, we might want !(x %in% y) instead of x %in% y. This is where %nin% ("not in") acts as a useful shortcut. Now, instead of !(x %in% y), you can just use x %nin% y. Note that the actual implementation of %nin% is slightly different so that it produces the same results more quickly for large data. You may run into some other packages that also have a %nin% function; to my knowledge, they are functionally the same.

One of my most common uses of both %in% and %nin% is when I want to subset an object. For instance, assume x is 1 through 5, y is 3 through 7, and I want only the instances of x that are not in y. Using %nin%, I would write x[x %nin% y], which leaves you with 1 and 2. I really don't like having to write the object's name twice in a row like that, so I created something to simplify further: %not%. You can now subset x to only the parts that are not in y like this: x %not% y. Conversely, you can do the equivalent of x[x %in% y] using the %just% operator: x %just% y.

As special cases for %not% and %just%, if the left-hand side is a matrix or data frame, the right-hand side is treated as column indices (if numeric) or column names (if character). For example, if I do mtcars %just% c("mpg", "qsec"), I get a data frame that is just the "mpg" and "qsec" columns of mtcars. Both are S3 methods, so other developers can add support for additional object types.
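
A few quick examples, following the descriptions above:

    library(jtools)
    x <- 1:5
    y <- 3:7
    x %nin% y                       # TRUE TRUE FALSE FALSE FALSE
    x %not% y                       # 1 2  (same as x[x %nin% y])
    x %just% y                      # 3 4 5 (same as x[x %in% y])
    mtcars %just% c("mpg", "qsec")  # just the mpg and qsec columns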

wrap_str(), msg_wrap(), warn_wrap(), and stop_wrap()

An irritation when writing messages/warnings/errors to users is breaking long strings across lines in your source code without unwanted line breaks showing up in the output. One problem is not knowing how wide the user's console is. wrap_str() takes any string and inserts line breaks at whatever the "width" option is set to, which automatically adjusts to the actual console width in RStudio and in some other setups. This means you can write the error message as a single string across multiple, perhaps indented, source lines without those line breaks and indentations being part of the console output. msg_wrap(), warn_wrap(), and stop_wrap() are wrap_str() wrappers (pun not intended) around message(), warning(), and stop(), respectively.
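
A minimal sketch of the intended usage (the function and message text are just examples):

    library(jtools)
    long_stop <- function() {
      # The line breaks and indentation below belong to the source code, not
      # the output; the string is reflowed to the current "width" option.
      stop_wrap("Something went wrong. This message is written across several
                indented source lines, but it reaches the console as a single,
                properly wrapped paragraph.")
    }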

Other changes

Bugfixes

jtools 1.1.1 (2018-09-23)

This is a minor release.

Bug fixes

Other changes

jtools 1.1.0 (2018-08-16)

This release was initially intended to be a bugfix release, but enough other things came up to make it a minor release.

Bug fixes

New features

Interface changes

New functions

jtools 1.0.0 (2018-05-08)

Major release

This release has several big changes embedded within, side projects that needed a lot of work to implement and required some user-facing changes. Overall these are improvements, but in some edge cases they could break old code. The following sections are divided by the affected functions. Some of the functions are discussed in more than one section.

interact_plot(), cat_plot(), and effect_plot()

These functions no longer re-fit the input model to center covariates, impose labels on factors, and so on. This change has several key benefits.

One of them is the new data argument for these functions. You do not normally need to use it if your model is fit with a y ~ x + z type of formula. But if you start doing things like y ~ factor(x) + z, then you need to provide the source data frame. Another benefit is that this allows for fitting polynomials with effect_plot() or even interactions with polynomials with interact_plot(). For instance, if my model was fit using this kind of formula --- y ~ poly(x, 2) + z --- I could then plot the predicted curve with effect_plot(fit, pred = x, data = data), substituting fit with whatever my model is called and data with whatever data frame I used is called.
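
A concrete version of that example, using mtcars as a stand-in:

    library(jtools)
    fit <- lm(mpg ~ poly(hp, 2) + wt, data = mtcars)
    # data is needed here because the formula transforms hp with poly()
    effect_plot(fit, pred = hp, data = mtcars)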

There are some possible drawbacks to these changes. One is that factor predictors are no longer supported in interact_plot() and effect_plot(), even two-level ones. This worked before by coercing them to 0/1 continuous variables and re-fitting the model. Since the model is no longer re-fit, this can't be done. To work around it, either transform the predictor to numeric before fitting the model or use cat_plot(). Relatedly, two-level factor covariates are no longer centered and are simply set to their reference value.

Robust confidence intervals: You can now plot confidence intervals based on robust standard errors for compatible models (tested on lm, glm). Just use the robust argument like you would for sim_slopes() or summ().

Preliminary support for confidence intervals for merMod models: You may now get confidence intervals when using merMod objects as input to the plotting functions. Of importance, though, is that the uncertainty reflects only the fixed effects. For now, a warning is printed. See the next section for another option for merMod confidence intervals.

Rug plots in the margins: So-called "rug" plots can be included in the margins of the plots for any of these functions. These show tick marks for each of the observed data points, giving a non-obtrusive impression of the distribution of the pred variable and (optionally) the dependent variable. See the documentation for interact_plot() and effect_plot() and the rug/rug.sides arguments.

Facet by the modx variable: Some prefer to visualize the predicted lines on separate panes, so that is now an option available via the facet.modx argument. You can also use plot.points with this, though the division into groups is not straightforward if the moderator isn't a factor. See the documentation for more on how that is done.
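
A hedged sketch combining the arguments described in the last few paragraphs (the "b" value for rug.sides, meaning "bottom", is an assumption based on ggplot2's geom_rug; the model is just an illustration):

    library(jtools)
    fit <- lm(mpg ~ hp * wt, data = mtcars)
    # robust intervals plus a rug along the bottom axis
    effect_plot(fit, pred = hp, robust = TRUE, rug = TRUE, rug.sides = "b")
    # facet by the moderator instead of overlaying the lines
    interact_plot(fit, pred = hp, modx = wt, facet.modx = TRUE, plot.points = TRUE)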

make_predictions() and plot_predictions(): New tools for advanced plotting

To give users more flexibility, jtools now exposes the (previously internal) functions that make effect_plot(), cat_plot(), and interact_plot() work. This should make it easier to tailor the outputs for specific needs. Some features may be implemented only for these functions, to keep the _plot functions from getting any more complicated than they already are.

The simplest use of the two functions is to use make_predictions() just like you would effect_plot()/interact_plot()/cat_plot(). The difference is, of course, that make_predictions() only makes the data that would be used for plotting. The resulting predictions object has both the predicted and original data as well as some attributes describing the arguments used. If you pass this object to plot_predictions() with no further arguments, it should do exactly what the corresponding _plot function would do. However, you might want to do something entirely different with the predicted data, which is part of the reason these functions are separate.
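
A hedged sketch of that two-step workflow (the arguments are assumed to mirror interact_plot(), per the description above):

    library(jtools)
    fit <- lm(mpg ~ hp * wt, data = mtcars)
    p <- make_predictions(fit, pred = hp, modx = wt)   # a `predictions` object
    plot_predictions(p)  # should match interact_plot(fit, pred = hp, modx = wt)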

One such feature specific to make_predictions() is bootstrap confidence intervals for merMod models.

All interaction tools

You may no longer use these tools to scale the models. Use scale_mod(), save the resulting object, and use that as your input to the functions if you want scaling.

All these tools have a new default for the centered argument. It is now set to centered = "all", but "all" no longer means what it used to. Now it refers to all variables not included in the interaction, including the dependent variable. This means that, in effect, the default option does the same thing that previous versions did. But instead of having that occur when centered = NULL, that's what centered = "all" means. There is no NULL option any longer. Note that with sim_slopes(), the focal predictor (pred) will now be centered --- this only affects the conditional intercept.
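
A quick sketch of the recommended scaling workflow described above:

    library(jtools)
    fit <- lm(mpg ~ hp * wt, data = mtcars)
    fit_scaled <- scale_mod(fit)       # scale the model yourself first
    sim_slopes(fit_scaled, pred = hp, modx = wt)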

sim_slopes()

This function now supports categorical (factor) moderators, though there is no option for Johnson-Neyman intervals in these cases. You can use the significance of the interaction term(s) for inference about whether the slopes differ at each level of the factor when the moderator is a factor.

You may now also pass arguments to summ(), which is used internally to calculate standard errors, p values, etc. This is particularly useful if you are using a merMod model for which the pbkrtest-based p value calculation is too time-consuming.
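
For example, a hedged sketch of the new factor-moderator support:

    library(jtools)
    mtcars$gear <- factor(mtcars$gear)
    fit <- lm(mpg ~ hp * gear, data = mtcars)
    # slopes of hp at each level of gear; no Johnson-Neyman output for factors
    sim_slopes(fit, pred = hp, modx = gear)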

gscale()

The interface has been changed slightly, with the actual numbers always provided as the data argument. There is no x argument and instead a vars argument to which you can provide variable names. The upshot is that it now fits much better into a piping workflow.

The entire function has gotten an extensive reworking, which in some cases should result in significant speed gains. And if that's not enough, just know that the code was an absolute monstrosity before and now it's not.

There are two new functions that are wrappers around gscale(): standardize() and center(), which call gscale() but with n.sd = 1 in the first case and with center.only = TRUE in the latter case.
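
A few hedged examples of the revised interface (assuming the wrappers accept vars the same way gscale() does):

    library(jtools)
    gscale(data = mtcars, vars = c("hp", "wt"))  # rescale just those columns
    standardize(mtcars, vars = "hp")             # gscale() with n.sd = 1
    center(mtcars, vars = "hp")                  # gscale() with center.only = TRUE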

summ()

Tired of specifying your preferred configuration every time you use summ()? Now, many arguments will by default check your options so you can set your own defaults. See ?set_summ_defaults for more info.

Rather than having separate scale.response and center.response arguments, each summ() function now uses transform.response to collectively cover those bases. Whether the response is centered or scaled depends on the scale and center arguments.

The robust.type argument is deprecated. Now, provide the type of robust estimator directly to robust. For now, if robust = TRUE, it defaults to "HC3" with a warning. Better is to provide the argument directly, e.g., robust = "HC3". robust = FALSE is still fine for using OLS/MLE standard errors.

Whereas summ.glm, summ.svyglm, and summ.merMod previously offered an odds.ratio argument, that has been renamed to exp (short for exponentiate) to better express the quantity.
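
Some hedged examples of the revised arguments described in the last few paragraphs:

    library(jtools)
    fit <- glm(am ~ hp + wt, data = mtcars, family = binomial)
    summ(fit, robust = "HC3")   # give the estimator type directly to robust
    summ(fit, exp = TRUE)       # exponentiated coefficients (formerly odds.ratio = TRUE)

    fit_lm <- lm(mpg ~ hp + wt, data = mtcars)
    summ(fit_lm, scale = TRUE, transform.response = TRUE)  # also scale the response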

vifs now works when there are factor variables in the model.

One of the first bugs summ() ever had occurred when the function was given a rank-deficient model. It is not straightforward to detect, especially since I need to make space for an almost-empty row in the output table. At long last, this release can handle such models gracefully.

Like the rest of R, when summ() rounded your output, items rounded exactly to zero would be treated as, well, zero. But this can be misleading if the original value was actually negative. For instance, if digits = 2 and a coefficient was -0.003, the value printed to the console was 0.00, suggesting a zero or slightly positive value when in fact it was the opposite. This is a limitation of the round() (and trunc()) functions. I've now changed it so the zero-rounded value retains its sign.

summ.merMod now calculates pseudo-R^2 much, much faster. For only modestly complex models, the speed-up is roughly 50x. Because it is now so much faster and throws errors or prints cryptic messages far less often, it is calculated by default. The confidence interval calculation is now "Wald" for these models (see confint.merMod for details) rather than "profile", which for many models can take a very long time and sometimes does not work at all. This can be toggled with the conf.method argument.
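
For example (a hedged sketch; confint = TRUE is assumed here to be the switch that requests intervals in the output):

    library(jtools)
    library(lme4)
    mfit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
    summ(mfit, confint = TRUE, conf.method = "profile")  # opt back into profile CIs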

summ.glm/summ.svyglm now will calculate pseudo-R^2 for quasibinomial and quasipoisson families using the value obtained from refitting them as binomial/poisson. For now, I'm not touching AIC/BIC for such models because the underlying theory is a bit different and the implementation more challenging.

summ.lm now uses the t-distribution for finding critical values for confidence intervals. Previously, a normal approximation was used.

The summ.default method has been removed. It was becoming an absolute terror to maintain and I doubted anyone found it useful. It's hard to provide the value added for models of a type that I do not know (robust errors don't always apply, scaling doesn't always work, model fit statistics may not make sense, etc.). Bug me if this has really upset things for you.

One new model type has been supported: rq models from the quantreg package. Please feel free to provide feedback for the output and support of these models.

scale_lm() and center_lm() are now scale_mod()/center_mod()

To better reflect the capabilities of these functions (not restricted to lm objects), they have been renamed. The old names will continue to work to preserve old code.

However, scale.response and center.response now default to FALSE to reflect the fact that only OLS models can support transformations of the dependent variable in that way.

There is a new vars = argument for scale_mod() that allows you to only apply scaling to whichever variables are included in that character vector.
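
For example:

    library(jtools)
    fit <- lm(mpg ~ hp + wt + cyl, data = mtcars)
    scale_mod(fit, vars = c("hp", "wt"))  # rescale only hp and wt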

I've also implemented a neat technical fix that allows the updated model to itself be updated while not also including the actual raw data in the model call.

plot_coefs() and plot_summs()

A variety of fixes and optimizations have been added to these functions. Now, by default, two confidence intervals are plotted: a thick line representing (with default settings) the 90% interval and a thinner line for the 95% interval. You can set inner_ci_level to NULL to get rid of the thicker line.

With plot_summs(), you can also set per-model summ() arguments by providing the argument as a vector (e.g., robust = c(TRUE, FALSE)). Length 1 arguments are applied to all models. plot_summs() will now also support models not accepted by summ() by just passing those models to plot_coefs() without using summ() on them.

Another new option is point.shape, similar to the model plotting functions. This is most useful for when you are planning to distribute your output in grayscale or to colorblind audiences (although the new default color scheme is meant to be colorblind friendly, it is always best to use another visual cue).

The coolest addition is the new plot.distributions argument, which if TRUE will plot normal distributions to even better convey the uncertainty. Of course, you should use this judiciously if your modeling or estimation approach doesn't produce coefficient estimates that are asymptotically normally distributed. Inspiration comes from https://twitter.com/BenJamesEdwards/status/979751070254747650.
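
Some quick examples pulling together the options described above:

    library(jtools)
    fit1 <- lm(mpg ~ hp + wt, data = mtcars)
    fit2 <- lm(mpg ~ hp + wt + cyl, data = mtcars)
    plot_summs(fit1, fit2, robust = c(TRUE, FALSE))    # per-model summ() arguments
    plot_summs(fit1, fit2, inner_ci_level = NULL)      # drop the thicker inner interval
    plot_summs(fit1, fit2, plot.distributions = TRUE)  # draw normal densities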

Minor fixes: broom's interface for Bayesian methods is inconsistent, so I've hacked together a few tweaks to make brmsfit and stanreg models work with plot_coefs().

You'll also notice vertical gridlines on the plots, which I think/hope will be useful. They are easily removable (see drop_x_gridlines()) with ggplot2's built-in theming options.

export_summs()

Changes here are not too major. Like plot_summs(), you can now provide unsupported model types to export_summs() and they are just passed through to huxreg. You can also provide different arguments to summ() on a per-model basis in the way described under the plot_summs() heading above.

There are some tweaks to the model info (provided by glance). Most prominent is for merMod models, for which there is now a separate N for each grouping factor.

theme_apa() plus new functions add_gridlines(), drop_gridlines()

New arguments have been added to theme_apa(): remove.x.gridlines and remove.y.gridlines, both of which are TRUE by default. APA hates giving hard and fast rules, but the norm is that gridlines should be omitted unless they are crucial for interpretation. theme_apa() is also now a "complete" theme, which means specifying further options via theme will not revert theme_apa()'s changes to the base theme.

Behind the scenes the helper functions add_gridlines() and drop_gridlines() are used, which do what they sound like they do. To avoid using the arguments to those functions, you can also use add_x_gridlines()/add_y_gridlines() or drop_x_gridlines()/drop_y_gridlines() which are wrappers around the more general functions.

Survey tools

weights_tests() --- wgttest() and pf_sv_test() --- now handle missing data in a more sensible and consistent way.

Colors

There is a new default qualitative palette, based on Color Universal Design (designed to be readable by the colorblind), that looks great to all. There are several other new palette choices as well. These are all documented at ?jtools_colors.

Other stuff

Using the crayon package as a backend, console output is now formatted for most jtools functions for better readability on supported systems. Feedback on this is welcome since this might look better or worse in certain editors/setups.

jtools 0.9.4 (2018-02-13)

This release is limited to dealing with the huxtable package's temporary removal from CRAN, which in turn makes this package out of compliance with CRAN policies regarding dependencies on non-CRAN packages.

Look out for jtools 1.0.0 coming very soon!

jtools 0.9.3 (2018-01-28)

Bugfixes:

jtools 0.9.2

Bugfix:

Feature update:

jtools 0.9.1 (2018-01-05)

Bugfix update:

Jonas Kunst helpfully pointed out some odd behavior of interact_plot() with factor moderators. There should no longer be occasions in which two different legends appear. The linetypes and colors should now also be consistent whether or not there is a second moderator. For continuous moderators, the darkest line should also be a solid line, and by default it corresponds to the highest value of the moderator.

Other fixes:

Feature updates:

jtools 0.9.0 (2017-11-12)

This may be the single biggest update yet. If you downloaded from CRAN, be sure to check the 0.8.1 update as well.

New features are organized by function.

johnson_neyman():

interact_plot():

summ():

New functions!

plot_summs(): A graphic counterpart to export_summs(), which was introduced in the 0.8.0 release. This plots regression coefficients to help in visualizing the uncertainty of each estimate and facilitates the plotting of nested models alongside each other for comparison. This allows you to use summ() features like robust standard errors and scaling with this type of plot that you could otherwise create with some other packages.

plot_coefs(): Just like plot_summs(), but without the special summ() features. This allows you to use models unsupported by summ(), however, and you can provide summ() objects to plot the same model with different summ() arguments alongside each other.

cat_plot(): This was a long time coming. It is a complementary function to interact_plot(), but is designed to deal with interactions between categorical variables. You can use bar plots, line plots, dot plots, and box and whisker plots to do so. You can also use the function to plot the effect of a single categorical predictor without an interaction.

jtools 0.8.1

Thanks to Kim Henry who reported a bug with johnson_neyman() in the case that there is an interval, but the entire interval is outside of the plotted area: When that happened, the legend wrongly stated the plotted line was non-significant.

Besides that bugfix, some new features:

jtools 0.8.0 (2017-10-10)

Not many user-facing changes since 0.7.4, but major refactoring internally should speed things up and make future development smoother.

jtools 0.7.4

Bugfixes:

Enhancements:

jtools 0.7.3 (2017-10-02)

Important bugfix:

New function: export_summs().

This function outputs regression models supported by summ() in table formats useful for RMarkdown output as well as specific options for exporting to Microsoft Word files. This is particularly helpful for those wanting an efficient way to export regressions that are standardized and/or use robust standard errors.

jtools 0.7.2

The documentation for j_summ() has been reorganized such that each supported model type has its own, separate documentation. ?j_summ will now just give you links to each supported model type.

More importantly, j_summ() will from now on be referred to as, simply, summ(). Your old code is fine; j_summ() will now be an alias for summ() and will run the same underlying code. Documentation will refer to the summ() function, though. That includes the updated vignette.

One new feature for summ.lm:

More tweaks to summ.merMod:

jtools 0.7.1 (2017-09-15)

Returning to CRAN!

A very strange bug on CRAN's servers was causing jtools updates to silently fail when I submitted them; I'd get a confirmation that the package passed all tests, but a LaTeX error related to an Indian journal I cited was torpedoing the submission before it reached CRAN's servers.

The only change from 0.7.0 is fixing that problem, but if you're a CRAN user you will want to flip through the past several releases as well to see what you've missed.

jtools 0.7.0

New features:

Bug fix:

jtools 0.6.1

Bug fix release:

jtools 0.6.0

A lot of changes!

New functions:

Enhancements:

Bug fixes:

jtools 0.5.0 (2017-08-08)

More goodies for users of interact_plot():

Other feature changes:

Bug fixes:

jtools 0.4.5 (2017-05-24)

jtools 0.4.4 (2017-03-26)

jtools 0.4.3

jtools 0.4.2 (2017-02-27)